• maltalex 2 days ago

    It's not just open source though. Many high quality sources of information are being (over-)exploited and hurt in the process. StackOverflow is effectively dead [0], the Internet Archive is being shunned by publishers [1], scientific journals are bombarded by fake papers [2] (and, anecdotally, low-effort LLM-driven reviews), projects like OpenStreetMap incur significant costs due to scraping [3], and many more.

    We went from data mining to data fracking.

    [0]: https://blog.pragmaticengineer.com/stack-overflow-is-almost-...

    [1]: https://www.niemanlab.org/2026/01/news-publishers-limit-inte...

    [2]: https://www.theregister.com/2024/05/16/wiley_journals_ai/

    [3]: https://www.heise.de/en/news/OpenStreetMap-is-concerned-thou...

    • _aavaa_ 2 days ago

      StackOverflow was well on its way to death even without ChatGPT, just look at the graph from [0]. It has been in a steady, consistent decline since 2014 (minus a very transient blip from COVID).

      Then the ChatGPT effect is a sudden drop in visitors. But the rate of decline after that looks more or less the same as pre-ChatGPT.

      • silverwind 2 days ago

        StackOverflow was killed by its toxic moderators. I hope it stays online though, because it's a massive source of knowledge, although in many cases already outdated.

        • devsda 2 days ago

          The overzealous moderator issue was probably the main reason, but I think the direct answers and summaries from Google also had a significant impact on StackOverflow. They took away potential contributors and reduced the incentives for active contribution.

          In a way it was a trial and glimpse of what was coming with the AI revolution

          • DSMan195276 2 days ago

            I agree it was a moderation issue, but for me it's Reddit that largely replaced my SO usage starting some years ago. Reddit is pretty similar to SO in design, but the more decentralized nature of the moderation means that questions rarely get "closed as duplicate" and answers tend to be more up-to-date as a result. There's not always a consensus answer and I'm often looking across multiple threads on the same thing, but that's still better than an outdated SO post.

            • usefulcat 2 days ago

              > It took away potential contributors

              There were multiple times I wanted to contribute to SO but couldn't because I didn't have sufficient "reputation", or something. I shrugged and moved on.

            • foxglacier 2 days ago

              I always thought StackOverflow was meant to fizzle out over time as more questions get answered and don't need to be asked again. Perhaps the decline is just a necessary part of their rule of having no duplicate questions - keeping it as a clean repository of knowledge rather than a messy forum.

              Just the other day a question I asked about 10 years ago got flagged as a duplicate. It turns out somebody else had asked the same question several years later and got a better answer than my question got, so that other one is the canonical one and mine is pushed away. It feels kind of offensive but it makes complete sense if the goal is to provide useful answers to people searching.

              • RHSeeger 2 days ago

                Unfortunately, the rule of no duplicate questions also destroyed lots of questions that weren't duplicates... because _someone_ couldn't be bothered to read them and realize they weren't the same.

                Plus, there were a lot of fun questions that were really interesting to start with; and they stopped allowing them.

                • jamesfinlayson 2 days ago

                  Yes, this. I've asked a couple of questions where the only responses are from people saying "possible dupe of x" where x is something that has a couple of the same words but no relation to what I'm asking.

                  • taneq 2 days ago

                    Turns out if you design a forum where a high effort, high quality post can be devalued by a low effort response, you discourage high effort, high quality posters.

                    • ted_bunny a day ago

                      This is where we need SO's "The Answer" feature, whatever they called it. Never seen it distilled so well.

                • nextaccountic 2 days ago

                  The major trouble with StackOverflow is that nominally duplicate questions may have different answers when asked in 2011 vs. 2026, and answer rankings (the thing that determines which answers are at the top) don't decay over time. So if someone tries to answer an old question with up-to-date info, they won't garner enough upvotes to overcome the old, previously correct but now outdated accepted answer at the top. (Even with a ranking decay, there is little incentive to give a new, up-to-date answer in an established thread - people are more likely to contribute to brand new threads.)

                  It would be better to allow duplicates in this specific case, but mark the old thread as outdated and link the questions in such a way that one can see the old thread and compare it to the new thread.
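
                  As a purely hypothetical sketch of what such a decay could look like (not anything SO actually implements; the half-life number is made up):

                      from datetime import datetime, timezone

                      def decayed_score(votes: int, answered_at: datetime,
                                        half_life_days: float = 730.0) -> float:
                          """Votes lose half their ranking weight every `half_life_days`,
                          so a newer, currently-correct answer can overtake an old one."""
                          age_days = (datetime.now(timezone.utc) - answered_at).total_seconds() / 86400
                          return votes * 0.5 ** (age_days / half_life_days)

                      # A 100-vote answer from 2013 vs. a 10-vote answer from 2025:
                      old = decayed_score(100, datetime(2013, 6, 1, tzinfo=timezone.utc))
                      new = decayed_score(10, datetime(2025, 6, 1, tzinfo=timezone.utc))
                      assert new > old  # with a ~2-year half-life, the newer answer ranks first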

                  • snailmailman 2 days ago

                    This is something I saw all the time. I’d look something up, knowing that there was probably an easy way to do <basic programming task> in modern c++ with one function call.

                    Find the stack overflow thread, answer from 10+ years ago. Not modern C++. New questions on the topic closed as duplicate. Occasionally the correct answer would be further down, not yet upvoted.

                    “Best practice” changes over time. I frequently saw wrong answers with install instructions that were outdated, commands that don’t function on newer OS version, etc etc.

                    • throwaway2037 2 days ago

                      You raise an interesting point about decay. I have thought about similar systems myself. One flaw in a simple decay rule would be that some technologies are very stable, e.g., C & POSIX API programming. However, other tech is very fast moving, like Python, Ruby, Java, C#, C++, Rust, etc. One idea to overcome this flaw might be to have moderators (who are specialists on the subject matter) provide a per-question decay rule. Example: something like struct layout or pointer manipulation in C, or the fopen() POSIX function, might never decay. But something like parsing JSON in any fast moving language might require annual updates. For example, a question about parsing JSON in Java might decay answers over a one year period to encourage people to revisit the topic.

                      I would like to hear Jeff Atwood and Joel Spolsky debate this topic with other "Internet points" experts for an hour-long podcast. They might brainstorm some very interesting ideas. I would also love to hear what they think about the "moderator problem". Some of the topics had incredibly toxic moderators who scared away newcomers and women. (Women are much less likely to participate in public software forums where public shaming is common.)
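
                      A rough sketch of that per-topic decay rule idea (the tags and half-lives below are invented for illustration, not a real SO mechanism):

                          # Hypothetical per-tag half-lives, in days; None means answers never decay.
                          TAG_HALF_LIFE_DAYS = {
                              "c": None,                 # struct layout, fopen(), POSIX: stable for decades
                              "java-json": 365.0,        # fast-moving ecosystems: revisit roughly yearly
                              "python-packaging": 365.0,
                          }

                          def decay_factor(tag: str, age_days: float) -> float:
                              """Weight multiplier applied to an answer's votes for a given tag."""
                              half_life = TAG_HALF_LIFE_DAYS.get(tag, 1460.0)  # unknown tags: ~4-year half-life
                              if half_life is None:
                                  return 1.0
                              return 0.5 ** (age_days / half_life)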

                      • nextaccountic a day ago

                        > One idea to overcome this flaw, might be to have moderators (...)

                        > I would also love to hear what they think about the "moderator problem". Some of the topics had incredibly toxic moderators (...)

                        Yeah, having bad moderators and arguably a bad, dysfunctional community is perhaps an even worse handicap. If you go to threads on meta.SE (Meta Stack Exchange, meta discussions on the whole ecosystem) you will see that people mostly believe the site policies are okay, and that's because everyone who didn't believe that left years ago.

                        Maybe better ideas on how to run a Q&A site will emerge in a brand new site; unfortunately I think that SO, and perhaps the wider Stack Exchange network, is done.

                      • Izkata a day ago

                        That's what the Bounty system was meant to handle. It could have been done better but it's not like they never considered it.

                        • shabatar a day ago

                          Great point. As the knowledge evolves, the ranking might need to evolve too, by allowing some versioning and somehow ranking or marking the outdated answers.

                        • appplication 2 days ago

                          The problem with this, and why SO’s downfall was completely self-inflicted, is that the correct answer from 2013 is only occasionally still the correct answer in 2018. There are a lot of other issues with SO’s general moderation policy but well and truly it was as idiotic and myopic as it was toxic.

                          They treated subjective questions about programming methods as if they were universal constants. It was completely antithetical to the actual pursuit of applied knowledge, or collecting and discussing best practices and patterns of software design. And it was painfully obvious for years that this was a huge problem, well before LLMs.

                          That said, I will say after being traumatized by having my threads repeatedly closed, I got so good at boiling down my problem to minimal reproducible examples that I almost never needed to actually post, because I’d solve it myself along the way.

                          So I guess it was great for training me to be a good engineer in the abstract sense, but absolutely shit at fostering any community or knowledge base.

                          • gf000 2 days ago

                            > that the correct answer from 2013 is only occasionally still the correct answer in 2018

                            Exactly! They should have added proper structuring to questions/replies so that it could specifically apply for Language/library version X. Later, such a question could be answered again (either by proving it's still correct for version X+1, or by giving a new answer) - that way people wouldn't have to look at a new reply with 2 votes vs an older, possibly outdated one with 100 and make a decision which to prefer.

                        • franktankbank a day ago

                          Would AI be as good at coding as it is without the rigorous moderation of a significant training source?

                        • ggregoire a day ago

                          > StackOverflow was well on its way to death even without ChatGPT, just look at the graph from [0]. It has been in steady consistent decline since 2014.

                          > [0] https://blog.pragmaticengineer.com/stack-overflow-is-almost-... (monthly question asked on Stack Overflow)

                          "monthly questions asked" is a weird metric to measure the decline of StackOverflow tho. How many times are people gonna ask how to compare 2 dates in python, or how to efficiently iterate an array in javascript? According to the duplicates rule on SO, should be once anyway. So it's just inevitable that "monthly questions asked" will forever decrease after reaching its peak, since everything has already been asked. Didn't mean it was dead tho, people still needed to visit the site to read the responses.

                          A better metric to measure its decline would be "monthly visits", which I guess was still pretty high pre LLM (100s of millions per month?), even if the "monthly questions asked" was declining. But now I imagine their "monthly visits" is closer to zero than 1M. I mean, even if you don't use Claude and its friends, searching anything about programming on Google returns a Gemini answer that probably comes from StackOverflow, removing any reason to ever visit the site…

                          • hn92726819 a day ago

                            Your first point only holds if nothing ever changes in the programming world. People write new languages and frameworks all the time. How do you compare dates in pandas? How about polars? Duckdb? Etc.

                          • raxxorraxor a day ago

                            Mods made asking questions a very hostile experience since they had a flawed ideal of SO becoming some form of encyclopedia. So no wonder people jumped on another train as quickly as possible, especially since it so often was a mistake to close a question whose next best answer was a long deprecated solution.

                            It still has some corners where people are better, but this is mostly the smaller niches.

                            • lithos a day ago

                              Even someone who hates AI, is likely to hate it less than SO.

                            • lelele a day ago

                              I don't know about others, but I switched to Reddit or forums for asking and answering questions because it offered a much smoother experience.

                              • torginus a day ago

                                We can only hope reddit shares the same fate. Its only saving grace - as much as it pains me to say it - is that it's still not Facebook

                                • AbstractH24 a day ago

                                  StackOverflow is the next iteration of Yahoo Answers.

                                • rurp a day ago

                                  Even if we completely avoid the worst case scenarios where AI obliterates the job market or evolves into a paperclip maximizer, it has a good shot of being the most destructive technology in generations. The tech industry has already done a lot of harm to our social fabric with social media, gambling, and other addictive innovations replacing real life experiences and personal connections. This has led to well documented increases in depression, loneliness, and political extremism.

                                  Now it seems AI is poised to eliminate most of the good innovations that tech brought about, and will probably crank social strife up to 11. It already feels like the foundations of the developed world have gotten shaky; I shudder to think what a massive blow will bring about.

                                  I've read enough history to know that I really, really don't want to live through a violent revolution, or a world war, or a great depression.

                                  • nunez 2 days ago

                                    AI also killed Reddit (the API changes were motivated by early GPT iirc)

                                    So so SO much good stuff is gone now and much of what's left is AI cruft

                                    • randomNumber7 2 days ago

                                        I think reddit was killed by a moderation that only allows the most narrow-minded persons to have their echo chamber.

                                      • noosphr a day ago

                                          Any moderation position that can be filled by an unemployed shut-in will be filled by an unemployed shut-in.

                                          You either have to pay your mods, like HN, or have your mods pay you, like the old BBS boards that reddit and stack overflow replaced.

                                        Problems of the 2010s.

                                        Today you can use an 8b model to flag all problematic posts. The only issue is that all the posts are also by 8b models.

                                        • Der_Einzige a day ago

                                          You’re telling me dang gets paid to be a mod here?

                                          AI can’t come fast enough!!!!

                                        • nunez a day ago

                                            That would've happened regardless. But the alternative --- zero moderation, 100% free speech --- is how you get the flamewars and spam that Slashdot and tons of other forums before it suffered from.

                                        • gf000 2 days ago

                                          Well, Reddit surely didn't help the issue with how it was all handled.

                                          • AbstractH24 a day ago

                                            AI has certainly killed Reddit.

                                            But where do people turn next? There were a lot of benefits to some of its niche communities.

                                            • nunez a day ago

                                              I don't think an alternative exists. Reddit was very unique. The last great BBS (in a sense) that non-Internet natives "got".

                                              Before astroturfing on Reddit at scale was possible, it was an extremely reliable place to get perspectives from real people about loads of things. It's still useful for this purpose, but the same level of trust isn't there.

                                              Now that social networking a la short-form video is "it" right now, I'm not sure if something text-based will thrive again like Reddit did. (People have been trying to make Lemmy the thing, and it's less popular than Mastodon.)

                                              • AbstractH24 a day ago

                                                >Before astroturfing on Reddit at scale was possible

                                                It has become so difficult to tell what is karma farming and what is people not bothering to search before asking.

                                                    In a strange way, what already started happening to the "other side" of Reddit six or so years ago, when the emergence of OnlyFans turned it into a place where people just want to sell you something, was a precursor to this.

                                            • lawstkawz 2 days ago

                                              That’s entropy for you.

                                                  Society is a Ship of Theseus; each generation ripping off planks and nailing their own in place.

                                              Having been online since the late 80s (am only mid 40s...grandpa worked at IBM, hooked me and my siblings up with the latest kit on the regular) I have read comments like this over and over as the 90s internet, 00s internet, now the 2010s state of the "information super highway" has been replaced.

                                              Tbh things have felt quite stagnant and "stuck" the last 20 years. All the investment in and caretaking of web SaaS infrastructure and JS apps and jobs for code camp grads made it feel like tech had come to a standstill relative to the pace of software progress prior to the last 15-ish years.

                                            • d_silin 2 days ago

                                                  Overpromising and overhyping of AI is making the whole IT industry worse.

                                              • sixtyj a day ago

                                                    Every time I start to discuss LLM/AI with non-IT people it is the same. Absurd expectations. Or denial of AI.

                                                    But as CEOs like Altman, Musk or Amodei have so much space in the media, they can amplify their products - as good salesmen :)

                                                I think that we are in times similar to 1997-1999, “everything will be web”.

                                              • nicbou 20 hours ago

                                                Google AI Overviews and ChatGPT are also killing traffic to information websites

                                                • csomar 2 days ago

                                                  Stack Overflow is an interesting case because these days most people ask questions on Discord instead. The data isn't public, and the search functionality is terrible. It makes no sense, but somehow companies still prefer it even though it's inefficient and the same questions keep getting asked over and over.

                                                  • m4rtink a day ago

                                                      Looks like at least Discord has recently decided to finally fix the issues caused by having users & is trying very hard not to have any going forward, through insane identity verification mandates enforced by the most toxic partner companies ever. :)

                                                    • arcologies1985 a day ago

                                                      > and the same questions keep getting asked over and over.

                                                      This is a feature not a bug. The people asking those questions are new blood and accepting and integrating them is how you sustain your community.

                                                      • karmakurtisaani a day ago

                                                        > the same questions keep getting asked over and over.

                                                        More user engagement, users spend more time on the platform. These companies don't have the best interest of users in mind.

                                                      • BrandoElFollito a day ago

                                                        StackOverflow was destroyed by a steady stream of miserable questions, and then by the infinite ego of moderators and power users.

                                                        They forgot that there are still people asking good questions and started to close everything.

                                                          A downvote from a bozo weighs the same as one from an expert.

                                                          You need to bend over backwards and then lie flat to not annoy mods.

                                                          Meta is a nest of psychopathic narcissists.

                                                        And many more.

                                                        Stack Exchange sites such as cooking or latex (and other niche ones) work very well. It is just that people are not full of themselves.

                                                        I started with SE ca 2014, loved it, participated a lot, accumulated half a million internet points and now hate the place. It did not age well.

                                                      • mcny 2 days ago

                                                        I feel like we are talking past each other.

                                                        1. I write hobby code all the time. I've basically stopped writing these by hand and now use an LLM for most of these tasks. I don't think anyone is opposed to it. I had zero users before and I still have zero users. And that is ok.

                                                        2. There are actual free and open source projects that I use. Sometimes I find a paper cut or something that I think could be done better. I usually have no clue where to begin. I am not sure if it even is a defect most of the time. Could it be intentional? I don't know. Best I can do is reach out and ask. This is where the friction begins. Nobody bangs out perfect code on first attempt but usually maintainers are kind to newcomers because who knows maybe one of those newcomers could become one of the maintainers one day. "Not everyone can become a great artist, but a great artist can come from anywhere."

                                                        LLM changed that. The newcomers are more like Linguini than Remy. What's the point in mentoring someone who doesn't read what you write and merely feeds it into a text box for a next token predictor to do the work. To continue the analogy from the Disney Pixar movie Ratatouille, we need enthusiastic contributors like Remy, who want to learn how things work and care about the details. Most people are not like that. There is too much going on every day and it is simply not possible to go in depth about everything. We must pick our battles.

                                                        I almost forgot what I was trying to say. The bottom line is, if you are doing your own thing like I am, LLM is great. However, I would request everyone to have empathy and not spread our diarrhea into other people's kitchens.

                                                        If it wasn't an LLM, you wouldn't simply open a pull request without checking first with the maintainers, right?

                                                        • sheepscreek 2 days ago

                                                          The real problem is that OSS projects do not have enough humans to manually review every PR.

                                                          Even if they were willing to deploy agents for initial PR reviews, it would be a costly affair and most OSS projects won’t have that money.

                                                          • mycall 2 days ago

                                                            PRs are just that: requests. They don't need to be accepted but can be used in a piecemeal way, merged in by those who find it useful. Thus, not every PR needs to be reviewed.

                                                            • debazel 2 days ago

                                                                Of course, but when you add enough noise you lose the signal, and as a consequence no PRs get merged anymore because it's too much effort to just find the ones you care about.

                                                              • Spivak 2 days ago

                                                                Don't allow PR's from people who aren't contributors, problem solved. Closing your doors to the public is exactly how people solved the "dark forest" problem of social media and OSS was already undergoing that transition with humans authoring garbage PRs for reasons other than genuine enthusiasm. AI will only get us to the destination faster.

                                                                I don't think anything of value will be lost by choosing to not interact with the unfettered masses whom millions of AI bots now count among their number.

                                                                • nunez 2 days ago

                                                                  That would be a huge loss IMO. Anyone being able to contribute to projects is what makes open source so great. If we all put up walls, then you're basically halfway to the bad old days of closed source software reigning supreme.

                                                                  Then there's the security concerns that this change would introduce. Forking a codebase is easy, but so are supply chain attacks, especially when some projects are being entirely iterated on and maintained by Claude now.

                                                                  • wolvesechoes 2 days ago

                                                                    > Anyone being able to contribute to projects is what makes open source so great. If we all put up walls, then you're basically halfway to the bad old days of closed source software reigning supreme.

                                                                    Exaggeration. Is SQLite halfway to closed source software? Open-source is about open source. Free software is about freedom to do things with code. None is about taking contributions from everyone.

                                                                    • nunez a day ago

                                                                      For every cathedral (like SQLite) there are 100s of bazaars (like Firefox, Chrome, hundreds of core libraries) that depend on external (and especially first-time) contributors to survive (because not everyone is getting paid to sling open-source).

                                                                      • throwaway2037 a day ago

                                                                            > Is SQLite halfway to closed source software?
                                                                        
                                                                        Is there a reason that you chose SQLite for your counterpoint? My hot take: I would say that SQLite is halfway to closed source software. Why? The unit tests are not open source. You need to pay to see them. As a result, it would be insanely hard to fork SQLite in a sustainable, safe manner. Please don't read this opinion as disliking SQLite for their software or commercial strategy. In hindsight, it looks like real genius to resist substantial forks. One of the biggest "fork threats" to SQLite is the advent of LLMs that can (1) convert C code to a different language, like Rust, and (2) write unit tests. Still, a unit test suite for a database will likely contain thousands (or millions) of edge case SQL queries. These are still probably impossible to recreate, considering the 25 year history of bug fixing done by the SQLite team.
                                                                      • pjmlp 2 days ago

                                                                        They are open source cathedrals.

                                                                      • repstosb a day ago

                                                                        And how does one become a maintainer, if there's no way to contribute from outside? Even if there's some extensive "application process", what is the motivation for a relatively new user to go through that, and how do they prove themselves worthy without something very much like a PR process? Are we going to just replace PRs with a maze of countless project forks, and you think that will somehow be better, for either users or developers?

                                                                        If I wanted to put up with software where every time I encounter a bug, I either have no way at all to report it, or perhaps a "reporting" channel but little likelihood of convincing the developers that this thing that matters to me is worthy of attention among all of their competing priorities, then I might as well just use Microsoft products. And frankly, I'd rather run my genitals through an electric cheese grater.

                                                                        • Spivak a day ago

                                                                          You get in contact with the current maintainers and talk to them. Real human communication is the only shibboleth that will survive the AI winter. Those soft skills muscles are about to get a workout. Tell them about what you use the software for and what kinds of improvements you want to make and how involved you'd like your role to be. Then you'll either be invited to open PRs as a well-known contributor or become a candidate for maintainership.

                                                                          Github issues/prs are effectively a public forum for a software project where the maintainers play moderator and that forum is now overrun with trolls and bots filling it with spam. Closing up that means of contributing is going to be the rational response for a lot of projects. Even more will be shunted to semi-private communities like Discord/Matrix/IRC/Email lists.

                                                                    • nemomarx 2 days ago

                                                                      Determining which PRs you should accept or take further seems like it requires some level of review? Maybe more like PR triage, I suppose.

                                                                      • protocolture 2 days ago

                                                                        Until you unintentionally pull in a vulnerability or intentional backdoor. Every PR needs to be reviewed.

                                                                        • zahlman 2 days ago

                                                                            The point was that you can also just reject a PR on the basis of what it purports to implement, or even just blanket ignore all PRs. You can't pull in what you don't... pull in.

                                                                          • throwaway150 2 days ago

                                                                            > Every PR needs to be reviewed.

                                                                            Why would you review a PR that you are never going to merge?

                                                                            • allthetime 2 days ago

                                                                              You have to first determine whether or not you might want to merge it...

                                                                              • protocolture 2 days ago

                                                                                Having not reviewed it, how do you know you are never going to merge?

                                                                                • throwaway150 2 days ago

                                                                                  If a PR claims to solve a problem that I don't need, then I can skip its review because I'll never merge it.

                                                                                  I don't think every PR needs reviewing. Some PRs we can ignore just by taking a quick look at what the PR claims to do. This only requires a quick glance, not a PR review.

                                                                                  • mwwaters 2 days ago

                                                                                    I took this thread as asking whether PRs that are pulled in should be reviewed.

                                                                            • bigiain 2 days ago

                                                                              You didn't see the latest AI grifter escalation? If you reject their PRs, they then get their AI to write hit pieces slandering you:

                                                                              "On 9 February, the Matplotlib software library got a code patch from an OpenClaw bot. One of the Matplotlib maintainers, Scott Shambaugh, rejected the submission — the project doesn’t accept AI bot patches. [GitHub; Matplotlib]

                                                                              The bot account, “MJ Rathbun,” published a blog post to GitHub on 11 February pleading for bot coding to be accepted, ranting about what a terrible person Shambaugh was for rejecting its contribution, and saying it was a bot with feelings. The blog author went to quite some length to slander Mr Shambaugh"

                                                                              https://pivot-to-ai.com/2026/02/16/the-obnoxious-github-open...

                                                                              • blackcatsec 2 days ago

                                                                                I am very strongly convinced that the person behind the agent prompted the angry post to the blog because they didn't get the gratification they were looking for by submitting an agent-generated PR in the first place.

                                                                                • bigiain 2 days ago

                                                                                  I agree. But even _that_ was taking advantage of LLMs ability to generate text faster than humans. If the person behind this had to create that blog post from scratch by typing it out themselves, maybe they would have gone outside and touched grass instead.

                                                                              • JumpCrisscross 2 days ago

                                                                                > not every PR needs to be reviewed

                                                                                Which functionally destroys OSS, since the PR you skipped might have been slop or might have been a security hole.

                                                                                • mcphage 2 days ago

                                                                                  I don’t think the OP was suggesting maintainers blindly accept PRs—rather, they can just blindly reject them.

                                                                                  • devsda 2 days ago

                                                                                    I think GP is making the opposite point.

                                                                                    Blindly rejecting all PRs means you are also missing out on potential security issues submitted by humans or even AI.

                                                                              • softwaredoug 2 days ago

                                                                                    Many open source projects are also (rightly) risk averse and care more about avoiding regressions.

                                                                                • bigiain 2 days ago

                                                                                  I've been following Daniel from the Curl project who's speaking out widely about slop coded PRs and vulnerability reports. It doesn't sound like they have ever had any problem keeping up with human generated PRs. It's the mountain of AI generated crap that's now sitting on top of all the good (or even bad but worth mentoring) human submissions.

                                                                                      At work we do not publish any code and are not part of the OSS community (except as grateful users of others' projects), but even we get clearly AI-enabled emails - just this week my boss forwarded me two that were pretty much "Hi, do you have a bug bounty program? We have found a vulnerability in (website or app obliquely connected to us)." One of them was a static site hosted on S3!

                                                                                      There have always been bullshitters looking to fraudulently invoice you for unsolicited "security analysis". But the bar for generating bullshit that looks plausible enough that someone has to spend at least a few minutes working out whether it's "real" or not has become extremely low, and the velocity with which the bullshit can be generated, have the victim's name and contact details added, and be vibe-spammed to hundreds or thousands of people has become near unstoppable. It's like SEO spammers from 5 or 10 years back but superpowered with OpenAI/Anthropic/whoever's cocaine.

                                                                                  • leoqa 2 days ago

                                                                                    My hot take: reviewing code is boring, harder than writing code, and less fun (no dopamine loop). People don’t want to do it, they want to build whatever they’re tasked with. Making reviewing code easier (human in the loop etc) is probably a big rock for the new developer paradigm.

                                                                                    • cryptonector 2 days ago

                                                                                      Oh no! It's pouring PRs!

                                                                                      Come on. Maintainers can:

                                                                                        - insist on disclosure of LLM origin
                                                                                        - review what they want, when they can
                                                                                        - reject what they can't review
                                                                                        - use LLMs (yes, I know) to triage PRs
                                                                                          and pick which ones need the most
                                                                                          human attention and which ones can be
                                                                                          ignored/rejected or reviewed mainly
                                                                                          by LLMs
                                                                                      
                                                                                      There are a lot of options.

                                                                                      And it's not just open source. Guess what's happening in the land of proprietary software? YUP!! The same exact thing. We're all becoming review-bound in our work. I want to get to huge MR XYZ but I've to review several other people's much larger MRs -- now what?

                                                                                          Well, we need to develop a methodology for working with LLMs. "Every change must be reviewed by a human" is not enough. I've seen incidents caused by ostensibly-reviewed but not actually understood code, so we must instead go with "every change must be understood by humans". This can sometimes involve a plain review (when the reviewer is an SME and also an expert in the affected codebase(s)), and it can involve code inspection (much more tedious and exacting). But it might also involve posting transcripts of LLM conversations for developing and, separately, reviewing the changes, with SMEs maybe doing lighter reviews when feasible, because we're going to have to scale our review time. We might need to develop a much more detailed methodology, including writing and reviewing initial prompts, `CLAUDE.md` files, etc., so as to make it more likely that the LLM will write good code and more likely that LLM reviews will be sensible and catch the sorts of mistakes we expect humans to catch.

                                                                                      • JumpCrisscross 2 days ago

                                                                                        > Maintainers can...insist on disclosure of LLM origin

                                                                                        On the internet, nobody knows you're a dog [1]. Maintainers can insist on anything. That doesn't mean it will be followed.

                                                                                        The only realistic solution you propose is using LLMs to review the PRs. But at that point, why even have the OSS? If LLMs are writing and reviewing the code for the project, just point anyone who would have used that code to an LLM.

                                                                                        [1] https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_...

                                                                                        • bigiain 2 days ago

                                                                                                Claiming maintainers can do things (which still take effort and time away from their OSS project's goals) is missing the point when the rate of slop submissions is ever increasing and malicious slop submitters refuse to follow project rules.

                                                                                          The Curl project refuse AI code and had to close their bug bounty program due to the flood of AI submissions:

                                                                                          "DEATH BY A THOUSAND SLOPS

                                                                                          I have previously blogged about the relatively new trend of AI slop in vulnerability reports submitted to curl and how it hurts and exhausts us.

                                                                                          This trend does not seem to slow down. On the contrary, it seems that we have recently not only received more AI slop but also more human slop. The latter differs only in the way that we cannot immediately tell that an AI made it, even though we many times still suspect it. The net effect is the same.

                                                                                          The general trend so far in 2025 has been way more AI slop than ever before (about 20% of all submissions) as we have averaged in about two security report submissions per week. In early July, about 5% of the submissions in 2025 had turned out to be genuine vulnerabilities. The valid-rate has decreased significantly compared to previous years."

                                                                                          https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...

                                                                                      • nunez 2 days ago

                                                                                        The issue here is that LLMs are great for hobbyist stuff like you describe, but LLMs are obscenely expensive to run and keep current, so you almost HAVE to shove them in front of everything (or, to use your example, spread the diarrhea into everyone elses kitchens) to try and pay the bill.

                                                                                        • AbstractH24 a day ago

                                                                                          Destroying open-source coding is only a concern if the code is the end, not the means.

                                                                                          Will AI [in time] bring about a growth in community-built products rather than code? Is that really a bad thing?

                                                                                          • conartist6 a day ago

                                                                                            Well, no, not unless it develops its own version of open source. That's kind of the point. Without healthy OSS, even AI's ability to create value would enter freefall

                                                                                            • AbstractH24 a day ago

                                                                                              It'll be interesting to see if a new open source ecosystem emerges rather than it just imploding.

                                                                                              My hunch is it will.

                                                                                          • pikseladam 2 days ago

                                                                                                  that's why a PR-blocking feature is coming to GitHub

                                                                                            • worthless-trash 2 days ago

                                                                                              I pretty much always open an issue, then a PR, they can close it if they want.. I usually have 'some' idea of the issue and use the PR as a first stab and hope the maintainer will tell me if i'm going about it the right or wrong way.

                                                                                              I fully expect most of my PR's to need at least a second or third revision.

                                                                                            • devsda 2 days ago

                                                                                                    If not for the accomplishments, advancements and potential benefits, the whole AI story since LLMs looks a lot like a sophisticated DDoS attack at multiple levels.

                                                                                                    AI bots are literally DDoS'ing servers. Adoption is consuming and making both physical and computing resources either inaccessible or expensive for almost everyone.

                                                                                                    The most significant one is the human cost. We suddenly found ourselves dealing with overwhelming levels of AI content/code/images/video that is mostly subpar. Maybe as AI matures we'll find it easier and have better tools to work with the volume, but for now it feels like it is coming from bad actors even when it is done by well meaning individuals.

                                                                                              There's no doubt AI has its uses and it is here to stay but I guess we'll all have to struggle until we reach that point where it is a net benefit. The hype by those financially invested isn't helping a bit though.

                                                                                              • ori_b 2 days ago

                                                                                                AI is passive consumption cosplaying as productivity. Any place where humans have to do something is a bug in the product.

                                                                                                Of course it's going to be damaging to places where people actually want to craft things.

                                                                                                • Gud a day ago

                                                                                                  Ok, but I am using ChatGPT and Claude to develop usable products five times faster than if I had developed them myself.

                                                                                                  • squeefers a day ago

                                                                                                          ai is slop and that's the consensus here for some reason. even before looking at the code, it MUST be slop. it's why i have no time for anyone harping on about the evils of social media (for kids and somehow not adults) yet telling me this on, guess what, social media.

                                                                                                    • Gud a day ago

                                                                                                      I don't understand your point and how it relates to my comment.

                                                                                                • fiatpandas 2 days ago

                                                                                                  Sufficiently advanced technology always looks like a DDoS on society. It overwhelms the senses, and when we come to the realization we cannot comprehend or fully predict its implications it puts a subset of the population into a bit of a crisis. We’re in that phase right now where we just need to brace ourselves.

                                                                                                • xtreak29 2 days ago

                                                                                                          Reviewing code was also a big bottleneck. With a lot more untested code from authors who don't care about reviewing their own code, it will take an even bigger toll on open source maintainers. Code quality between side projects and open source projects is different. Ensuring good code quality enables long term maintenance for open source projects that have to support the feature through the years as a compatibility promise.

                                                                                                  • sodapopcan 2 days ago

                                                                                                    That's where pair programming came in but it turns out that most people hate each other so much that they'd rather work with a machine pretending to be a person.

                                                                                                    I realize there are many levels to this claim but I'm not being sarcastic at all here.

                                                                                                    • mycall 2 days ago

                                                                                                      Using an LLM is a form of pair programming.

                                                                                                      • sodapopcan 2 days ago

                                                                                                        Not sure how to respond to this as clearly that's what I was getting at. Perhaps this is a response from an LLM though. Again, not being sarcastic, it just seems like it's maybe the case?

                                                                                                        • squeefers a day ago

                                                                                                          you make a good point and everything but have you considered the way people using LLM is similar to the way we review code together as humans? but if you think about it, they just swapped one of the humans with an LLM

                                                                                                          • sodapopcan a day ago

                                                                                                            Yes, I am just against code review (except in certain circumstances) and think pair programming (with humans) is much more productive and beneficial.

                                                                                                            • munksbeer a day ago

                                                                                                              Pair programming is exhausting to a lot of people, myself included. My brain just doesn't work like that. I work in fits and starts, with weird, sustained bursts of productivity.

                                                                                                              Pair programming is draining to me.

                                                                                                              • sodapopcan 4 hours ago

                                                                                                                It's exhausting to me too! But when you do it every day you get used to it. You also get a lot more done so last time I did it we would work shorter days.

                                                                                                        • wasmainiac 2 days ago

                                                                                                                Not really, LLMs do not push back on design decisions and will happily continue with whatever prompt you throw at them. That's after we look past quality issues.

                                                                                                                "You're absolutely right…"

                                                                                                          • batshit_beaver 2 days ago

                                                                                                            Yeah, akin to talking to a rubber ducky

                                                                                                            • sodapopcan 2 days ago

                                                                                                                    I'd like to agree - sorta yes, but also really no, because it's a rubber ducky that doesn't give you the chance to come to your own conclusion, and even if it does, it has you questioning it.

                                                                                                              • squeefers a day ago

                                                                                                                i find its the opposite, LLMs can be made to agree with anything.... largely because that agreeability is in their system prompt

                                                                                                                • batshit_beaver 20 hours ago

                                                                                                                  Yeah, this. Every conversation inevitably ends with "you're absolutely right!" The number of "you're absolutely right"s per session is roughly how I measure model performance (inverse correlation).

                                                                                                                  • sodapopcan a day ago

                                                                                                                    Ha, touche!

                                                                                                        • nblgbg 2 days ago

                                                                                                          Isn’t it also destroying the internet with low-quality content and affecting content creation in general? Can LLMs still rely on data from the open internet for training?

                                                                                                          • bmurphy1976 2 days ago

                                                                                                            I'm going to take issue with AI destroying the internet. Our short-attention-span, profit-driven culture was already well on its way to trashing everything that was good. AI is only accelerating the inevitable.

                                                                                                            • slopinthebag 2 days ago

                                                                                                              Ya but that's like saying we were going 10kmh, it's nbd that we accelerate to 1000kmh since we were gonna hit the wall anyways

                                                                                                              • JumpCrisscross 2 days ago

                                                                                                                > that's like saying we were going 10kmh, it's nbd that we accelerate to 1000kmh since we were gonna hit the wall anyways

                                                                                                                Devil's advocate: folks will take that wall a lot more seriously at 1,000 km/h.

                                                                                                                "At Jena and Auerstedt the backwardness of the Prussian Army became apparent. By 1806, Prussian military doctrines have been unchanged for more than 50 years—tactics were monotonous, and the wagon system was obsolete" [1]. They had been obsolete for some time. But they didn't break until they hit Napoleon's army.

                                                                                                                Similarly, we have a lot of social plumbing that became–with the benefit of hindsight–obsolete with social media. It was possible to ignore, however, because the rate of change was slow. Now it isn't.

                                                                                                                [1] https://en.wikipedia.org/wiki/Battle_of_Jena%E2%80%93Auerste...

                                                                                                                • squeefers a day ago

                                                                                                                  > Similarly, we have a lot of social plumbing that became–with the benefit of hindsight–obsolete with social media

                                                                                                                  like what? because weve been able to send messages to each other on a computer since the 50s... or do you mean tiktok and twitter specifically?

                                                                                                                  • JumpCrisscross a day ago

                                                                                                                    > we’ve been able to send messages to each other on a computer since the 50s

                                                                                                                    Not what we colloquially refer to as social media.

                                                                                                                    • squeefers 10 hours ago

                                                                                                                      social media in uk law = any website with a message feature. social media in common parlance = anything from whatsapp and discord to youtube and tiktok, even sometimes aliexpress. anything with a doomscroll feature. anything showing videos, messaging between users allowed or not.

                                                                                                                      its common usage is confused, the same as the common usage of trolling has strayed from its original meaning because normies picked it up and started using it to mean anyone abusing someone from an online account.

                                                                                                                    • ben_w 10 hours ago

                                                                                                                      > because weve been able to send messages to each other on a computer since the 50s...

                                                                                                                      My first partner was born in the 70s and didn't even have a landline growing up.

                                                                                                                      Here's some stuff I think counts as "plumbing" (i.e. infrastructure) of social connections, which has been lost since the 50s:

                                                                                                                      • Local newspapers (everywhere, I think?), where an actual editor could (and to a limited extent was held responsible if they didn't) filter out the conspiracy theories.

                                                                                                                      • Village churches (that might be mainly a UK-specific thing, IDK?) and other similar local community groups, where your local decisions couldn't be brigaded and overwhelmed by fans of a billionaire living on a yacht, as those fans would need to travel to your village personally and most people couldn't be bothered. Now, even when the groups still exist and meet, they can be brigaded.

                                                                                                                      • Yellow pages getting replaced with Facebook et al insisting that their ad system is the only way any small business could possibly get their name out, when a significant fraction of ads are outright scams.

                                                                                                                      • squeefers 10 hours ago

                                                                                                                        i feel like peter hitchens saying this, but i agree the british social life is decaying. i thought you meant something else by social plumbing. i cant argue other than i dont think the village churches actually do prevent brigading, locally anyway, and its that small minded middle-englander mindset that the internet actually smashes. my nan was all bout dat life, and they ostracized hetero divorcees, and you can imagine what they thought of gay people... so if you agree with them its fine, but if its your only choice of social group its not so great. ill take the point there is very little public space where people can even assemble to create a group in the first place though

                                                                                                                        • ben_w 9 hours ago

                                                                                                                          > i thought you meant something else by social plumbing.

                                                                                                                          Check the names, for all I know JumpCrisscross did mean something else. :)

                                                                                                                          > my nan was all bout dat life, and they ostracized hetero divorcees, and you can imagine what they thought of gay people... so if you agree with them its fine, but if its your only choice of social group its not so great.

                                                                                                                          I'm not claiming the old way was perfect (IIRC my gran was a generation that thought it scandalous to change denomination), just that it wasn't so easily manipulated from different continents. Back then it took a lot of effort over an extended period to do what can now be had for a few dollars of LLM tokens, and 5 years ago could've been had for tens of thousands of dollar-pound-euros spent on people gig-economy-ing tweets while they work at something like Amazon's Mechanical Turk.

                                                                                                                          • squeefers 8 hours ago

                                                                                                                            > Back then it took a lot of effort over an extended period to do what can now be had for a few dollars of LLM tokens,

                                                                                                                            we are just prone to hysteria id say, predating the invention of the internet, phones or even print media (though they certainly exacerbate the issue)

                                                                                                                            https://en.wikipedia.org/wiki/Day-care_sex-abuse_hysteria remember the US daycare/satanism scare... it made its way to england via TV, no mechanical turks required.

                                                                                                                            https://en.wikipedia.org/wiki/Yellow_journalism the common person manipulated by lies in print. old as print itself

                                                                                                                  • _heimdall 2 days ago

                                                                                                                    This is exactly how we collectively "solve" so many problems today though; it's far from unique to this topic.

                                                                                                                    We over-medicate people, especially the elderly, because each new med has side effects and they're dying eventually anyway. We print more and more debt to paper over massive budget deficits because the unspoken reality is that we're financially screwed either way. We pile more and more regulations on because we'd rather further grow the government and kick the can a few more times. We bolt one new emissions system after another onto our diesel engines because they're already unreliable, who cares.

                                                                                                                    We don't consider how we got here, only what the next step we take should be. And don't even ask whether a step should be taken at all; progress requires changing things constantly, and we rarely give ourselves time to look back and retrace our steps.

                                                                                                                    • fyredge 2 days ago

                                                                                                                      Your examples don't support your premise. Over-medication comes from all the attempts to fix the various medical conditions found. Adding regulations is meant to fix all the problems of people finding new ways to abuse the system.

                                                                                                                      This is the exact opposite of accelerationism, which would advocate for less medication so that sick people die quicker, and less regulation so that society would be exploited faster and collapse faster.

                                                                                                                    • bmurphy1976 2 days ago

                                                                                                                      Well, then our disagreement is that I feel we were already going at 1000 km/h. Nowhere did I say we should keep doing this, or that it was a good thing, or that we should ignore it. My point is simple: we already needed to stop a long time ago.

                                                                                                                      Let me re-use your analogy. We were already driving off a cliff, and we're blaming the fact that we're pushing on the gas and accelerating while ignoring that we were already heading that way and the brake lines were cut.

                                                                                                                      • squeefers a day ago

                                                                                                                        every culture EVER said that human culture is in decline. its big-babyism. i dont-like-it-ism.

                                                                                                                      • api 2 days ago

                                                                                                                        Beat me to it. Facebook/Meta, Twitter/X, Google/YouTube, and TikTok have done quite a bit more damage to the Internet than AI.

                                                                                                                        The future of the net was closed gated communities long before AI came along. At worst it’s maybe the last nail in the coffin. But the coffin lid was already on and the man inside was already dead.

                                                                                                                        AI is, I think, more mixed. It is creating more spam and noise, but AI itself is also fascinating to play with. It’s a genuine innovation and playing with it sometimes makes me feel the way I did first exploring the web.

                                                                                                                        • krater23 2 days ago

                                                                                                                            The difference is that the web had no borders; AI has strong borders around what it does and doesn't do.

                                                                                                                          • api 20 hours ago

                                                                                                                            I can download uncensored models pretty easily. There’s even uncensored frontier models. My machine isn’t big enough to run those but you can rent power to run them pretty cheap if you want.

                                                                                                                          • lelanthran 2 days ago

                                                                                                                            They didn't cause bug bounty programs to be withdrawn, objectively a bad thing for projects.

                                                                                                                            The difference between AI slop and the existing large tech corps is that the large corps you list never strayed into the lane occupied by OSS.

                                                                                                                            • api a day ago

                                                                                                                              Are you kidding? Look at who runs and funds OSI. It's a revolving door. The main purpose of OSS for the last 20 years has been to "commoditize your complements" and/or dump on the market to destroy competitors. Any license that attempts to restrict this behavior and prevent billion-dollar companies from simply strip mining OSS is "not OSI compliant."

                                                                                                                              The entire OSS "cloud native" ecosystem is an on-ramp to expensive managed cloud. It's intentionally designed to be complicated and arcane to sell managed services. Sure you "can" run it yourself, but wouldn't you rather Google or Amazon run it for you?

                                                                                                                              The main role of OSS in the ecosystem is as a parts yard to support SaaS. SaaS is the most closed model of software development and sales, far more closed than closed-source commercial software you run on your own system.

                                                                                                                              The OSS mindset is generally stuck in the 1990s and has not updated its understanding of the world since then.

                                                                                                                            • JKCalhoun 2 days ago

                                                                                                                              "Facebook/Meta, Twitter/X, Google/YouTube, and TikTok have done quite a bit more damage to the Internet than AI."

                                                                                                                              Sure… so far.

                                                                                                                            • mmooss 2 days ago

                                                                                                                              Agreed: The Internet has long been up-to-your-eyeballs with low quality content (i.e., bullsh-t). Blaming LLM software for it is ignoring the well-known reality of just a year or two ago.

                                                                                                                              • add-sub-mul-div 2 days ago

                                                                                                                                This is the same stupid reasoning that told us Trump would be a good outcome because the system was imperfect and ruining it fully would magically create a better one.

                                                                                                                                • bmurphy1976 2 days ago

                                                                                                                                  What the hell?

                                                                                                                                  I didn't say this was a good thing, I only said things were already fucked. And Trump is also a symptom of a deeper rot in our system. He just happens to be the asshole who took advantage of it.

                                                                                                                                  If you don't fix the deeper issues, it doesn't matter what's going to happen. Blaming AI is blaming a symptom, not the cause.

                                                                                                                                  Stating that we need to fix the deeper problem isn't even close to the same thing as whatever this nonsense is you responded with.

                                                                                                                                • krater23 2 days ago

                                                                                                                                  Nope. You just miss the millions of SEO websites that were normally easy to spot and ignore. Now there are millions of AI-generated SEO websites that are difficult to spot and contain only slop that doesn't help you find the information you're searching for.

                                                                                                                                • snarfy 2 days ago

                                                                                                                                  It doesn't have to be low quality. It really is another tool like any other. You can put low effort in and get working results. That low-effort, working result gets shipped immediately and gives the whole process a bad rap. The source is generated crap that lacks craftsmanship and quality. But this gets AI dismissed when it shouldn't be. You can get quality, well-crafted source code if you make that a goal and keep iterating.

                                                                                                                                  • krater23 2 days ago

                                                                                                                                    You can, but when you go through the effort needed to get AI to generate good code, you could just write it yourself. So there are only two kinds of code falling out of AI tools: boilerplate code and shitty code.

                                                                                                                                    • bigstrat2003 2 days ago

                                                                                                                                      Exactly. There's no benefit to using LLMs as they exist today, because it winds up being the same amount of work (if not more!) to ensure that they are giving you code which actually works. That isn't a useful tool.

                                                                                                                                  • randomNumber7 2 days ago

                                                                                                                                    Actually, most of the stuff on the internet I really enjoyed was not profit-driven. What really destroyed it, imo, is the attention-seeking attitude that results from earning money with advertisements.

                                                                                                                                    • fullshark 2 days ago

                                                                                                                                      The economics of content platforms had already started destroying the internet. A lot of the reason the internet was so good for a long time was creators' faith that good content would win; that turned out to be false.

                                                                                                                                      • TiredOfLife 2 days ago

                                                                                                                                        Places like Microsoft support community forums predate LLMs, and they are filled with wrong and useless information that drowns out useful information through sheer volume. Same with the countless websites that scrape forums and other sites and republish the text. Same with auto-generated YouTube videos - those existed pre-LLM.

                                                                                                                                        • stickynotememo 2 days ago

                                                                                                                                          So what's the alternative? Should we go back to reading encyclopedias from the 2010s? I ask this because the need for information hasn't decreased for human beings, just because the capability to produce slop has suddenly increased.

                                                                                                                                          • skeeter2020 2 days ago

                                                                                                                                            >> I ask this because the need for information hasn't decreased for human beings, just because the capability to produce slop has suddenly increased.

                                                                                                                                            Isn't that the complaint to which you're responding? The SUPPLY side of the equation is the problem, so reading encyclopedias wouldn't impact that. Funnily enough, the criticism of Wikipedia was that a bunch of amateurs couldn't beat the quality of a small group of experts curating a controlled collection, and we saw that wasn't true. Maybe AI has pushed this to a new level where we need to tighten access and attention once again?

                                                                                                                                        • debarshri 2 days ago

                                                                                                                                          This weekend, I found an issue with Microsoft's new Golang version of sqlcmd. Ran Claude code, fixed the issue, which I wouldn't have done if agent stuff did not exist. The fix was contributed back to the project.

                                                                                                                                          I think it is about who is contributing, intention, and various other nuances. I would still say it is net good for the ecosystem.

                                                                                                                                          • atomicnumber3 2 days ago

                                                                                                                                            Did you actually fix the issue, or did you fix the issue and introduce new bugs?

                                                                                                                                            The problem is the asymmetry of effort. You verified that you fixed your issue. The maintainers have to verify literally everything else (or they're the ones taking the hit if they're just LGTMing it).

                                                                                                                                            Sorry, I am sure your specific change was just fine. But I'm speaking generally.

                                                                                                                                            How many times have I at work looked at a PR and thought "this is such a bad way to fix this I could not have come up with such a comically bad way if I tried." And naturally couldn't say this to my fine coworker whose zeal exceeded his programming skills (partly because someone else had already approved the PR after "reviewing" it...). No, I had to simply fast-follow with my own PR, which had a squashed revert of his change, with the correct fix, so that it didn't introduce race conditions into parallel test runs.

                                                                                                                                            And the submitter of course has no ability to gauge whether their PR is the obvious trivial solution, or comically incorrect. Therein lies the problem.

                                                                                                                                            • snovv_crash 2 days ago

                                                                                                                                              This is why open source projects need good architecture and high test coverage.

                                                                                                                                              I'd even argue we need a new type of test coverage, something that traces back the asserts to see what parts of the code are actually constrained by the tests, sort of a differential mutation analysis.
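
                                                                                                                                              To make that concrete, here's a toy sketch of what I mean (purely hypothetical, not an existing tool; the function and test names are made up): mutate one comparison at a time, re-run the tests, and see which mutants survive. A surviving mutant marks behaviour the asserts never actually constrain.

                                                                                                                                                import ast

                                                                                                                                                SOURCE = """
                                                                                                                                                def shipping_cost(weight_kg):
                                                                                                                                                    if weight_kg < 1:
                                                                                                                                                        return 5
                                                                                                                                                    if weight_kg < 10:
                                                                                                                                                        return 9
                                                                                                                                                    return 20
                                                                                                                                                """

                                                                                                                                                def test_suite(ns):
                                                                                                                                                    # The project's tests, run against a (possibly mutated) module namespace.
                                                                                                                                                    cost = ns["shipping_cost"]
                                                                                                                                                    assert cost(0.5) == 5
                                                                                                                                                    assert cost(1) == 9
                                                                                                                                                    assert cost(5) == 9
                                                                                                                                                    # Nothing exercises the 10 kg boundary, so that mutant should survive.

                                                                                                                                                class WeakenComparison(ast.NodeTransformer):
                                                                                                                                                    """Turn the n-th '<' into '<=' (a classic boundary mutation)."""
                                                                                                                                                    def __init__(self, target):
                                                                                                                                                        self.target = target
                                                                                                                                                        self.seen = 0
                                                                                                                                                        self.mutated_line = None

                                                                                                                                                    def visit_Compare(self, node):
                                                                                                                                                        self.generic_visit(node)
                                                                                                                                                        for i, op in enumerate(node.ops):
                                                                                                                                                            if isinstance(op, ast.Lt):
                                                                                                                                                                if self.seen == self.target:
                                                                                                                                                                    node.ops[i] = ast.LtE()
                                                                                                                                                                    self.mutated_line = node.lineno
                                                                                                                                                                self.seen += 1
                                                                                                                                                        return node

                                                                                                                                                def run_mutants(source, tests):
                                                                                                                                                    target = 0
                                                                                                                                                    while True:
                                                                                                                                                        mutator = WeakenComparison(target)
                                                                                                                                                        tree = ast.fix_missing_locations(mutator.visit(ast.parse(source)))
                                                                                                                                                        if mutator.mutated_line is None:
                                                                                                                                                            break  # no comparisons left to mutate
                                                                                                                                                        ns = {}
                                                                                                                                                        exec(compile(tree, "<mutant>", "exec"), ns)
                                                                                                                                                        try:
                                                                                                                                                            tests(ns)
                                                                                                                                                            print(f"line {mutator.mutated_line}: mutant SURVIVED (not constrained by any assert)")
                                                                                                                                                        except AssertionError:
                                                                                                                                                            print(f"line {mutator.mutated_line}: mutant killed (an assert constrains this code)")
                                                                                                                                                        target += 1

                                                                                                                                                run_mutants(SOURCE, test_suite)

                                                                                                                                              Real mutation testing tools (mutmut, Stryker, etc.) do this at scale; the point is that a coverage percentage alone can't tell you whether a line is actually pinned down by an assert.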

                                                                                                                                              • rixed a day ago

                                                                                                                                                This could have happened before AI agents too, but yes, that's another step in that direction.

                                                                                                                                              • mysterydip 2 days ago

                                                                                                                                                I think the problem is that determining who is contributing, their intention, and those other nuances takes a human's time and effort. And at some point the number of contributions becomes too much to sort through.

                                                                                                                                                • debarshri 2 days ago

                                                                                                                                                  I think building enough barriers, processes, and mechanisms might work. I don't think it needs to be human effort.

                                                                                                                                                  • ThrowawayR2 2 days ago

                                                                                                                                                    If it's not human effort, it costs tokens, lots of tokens, that need to be paid for by somebody.

                                                                                                                                                    The LLM providers will be laughing all the way to the bank because they get paid once by the people who are causing the problem and paid again by the person putting up the "barriers, processes, and mechanisms" to control the problem. Even better for them, the more the two sides escalate, the more they get paid.

                                                                                                                                                    • username223 2 days ago

                                                                                                                                                      So open source development should be more like job-hunting and hiring, where humans feed AI-generated resumes into AI resume filters which supposedly choose reasonable candidates to be considered by other humans? That sounds... not good.

                                                                                                                                                  • kermatt 2 days ago

                                                                                                                                                    If you used Claude to fix the issue, built and tested your branch, and only then submitted the PR, the process is not much different from pre-LLM days.

                                                                                                                                                    I think the problem is where bug-bounty or reputation chasers are letting LLMs write the PRs, _without_ building and testing. They seek output, not outcomes.

                                                                                                                                                    • softwaredoug 2 days ago

                                                                                                                                                      That’s the positive case IMO - a human, you, remain responsible for the fix. It doesn’t matter if AI helped.

                                                                                                                                                      The negative case is free-running OpenClaw slop cannons that could even be malicious.

                                                                                                                                                      • _joel 2 days ago

                                                                                                                                                        I agree, but that's assuming the project accepts AI generated code, of course. Especially around the legality of accepting commits written by an AI trained on god knows what dataset.

                                                                                                                                                        • debarshri 2 days ago

                                                                                                                                                          We have been doing this lately; when we hit a roadblock with open source, we run Claude code for fixing OSS issues and contributing back. We genuinely put effort into testing it out thoroughly.

                                                                                                                                                          We don't want to bother maintainers, as they can focus on more important issues. I think a lot of tail-end issues and bugs can be addressed in OSS.

                                                                                                                                                          We leave it up to the maintainers to accept the PR or not, but we solve our problems as we thoroughly test the changes.

                                                                                                                                                      • thrance 2 days ago

                                                                                                                                                        Genuinely interested in the PR, if you would kindly care to link it.

                                                                                                                                                      • krater23 2 days ago

                                                                                                                                                        And are you sure that you fixed it without creating 20 new bugs? For the reader this could mean that you never understood the bug, so how can you be sure that you've done anything right?

                                                                                                                                                        • saghm 2 days ago

                                                                                                                                                          How do you make sure you don't create bugs in the code you write without an LLM? I imagine for most people, the answer is a combination of self-review and testing. You can just do those same things with code an LLM helps you write and at that point you have the same level of confidence.

                                                                                                                                                          • xigoi 2 days ago

                                                                                                                                                            It’s much harder to understand code you didn’t write than code you wrote.

                                                                                                                                                            • saghm a day ago

                                                                                                                                                              Yes, that's the fundamental tradeoff. But if the amount of time you save writing the code is higher than the amount of extra time you need to spend reading it, the tradeoff is worth it. That's going to vary from person to person for a given task though, and as long as the developer is actually spending the extra time reading and understanding the code, I don't think the approach matters as much as the result.

                                                                                                                                                          • debarshri 2 days ago

                                                                                                                                                            I'm pretty sure I did not create bugs, because I validated it thoroughly; I had to deploy it into production in a fintech environment.

                                                                                                                                                            So I am pretty confident about, as well as convinced of, the change. But then I know what I know.

                                                                                                                                                            • wussboy 2 days ago

                                                                                                                                                              This is the fundamental problem. You know what you know, but the maintainer does not, and cannot possibly take the time to find out what every single PR author knows before they accept it. AI breaks every part of the web of trust that is foundational to knowing anything.

                                                                                                                                                            • Aurornis 2 days ago

                                                                                                                                                              Using an LLM as an assistant isn’t necessarily equivalent to not understanding the output. A common use case of LLMs is to quickly search codebases and pinpoint problems.

                                                                                                                                                              • mycall 2 days ago

                                                                                                                                                                Code complexity is often the cause of more bugs, and complexity naturally comes from more code. It is not uncommon. As they say, the best code I ever wrote was no code.

                                                                                                                                                                • silverwind 2 days ago

                                                                                                                                                                  If the test coverage is good it will most likely be fine.

                                                                                                                                                              • 0xbadcafebee 2 days ago

                                                                                                                                                                Remember when projects were getting overwhelmed by PRs from students just editing a line in a README so they could win a t-shirt? That was 2020, and they weren't using AI. The open source community has been going downhill for a while. The new generation isn't getting mentored by the old generation, so stable, old-fogey methods established by Linux distributions are eschewed by the new kids. Technology advancement has made open source interactions a little too easy, and unnecessarily fragile. Some ecosystems focus way too much on crappy reusable components, and don't focus enough on supply chain security.

                                                                                                                                                                Here's the good news: AI cannot destroy open source. As long as there's somebody in their bedroom hacking out a project for themselves, that then decides to share it somehow on the internet, it's still alive. It wouldn't be a bad thing for us to standardize open source a bit more, like templates for contributors' guides, automation to help troubleshoot bug reports, and training for new maintainers (to help them understand they have choices and don't need to give up their life to maintain a small project). And it's fine to disable PRs and issues. You don't have to use GitHub, or any service at all.

                                                                                                                                                                • skeeter2020 2 days ago

                                                                                                                                                                  I get your core point, but the reality is it CAN destroy the ecosystem around OSS upon which it heavily relies: discoverability and community. I don't think you're accurately representing just how much noise and confusion AI slop creates. When it comes to using GitHub, it's not because it is an amazing application, but because that's where the people are.

                                                                                                                                                                  • charcircuit 2 days ago

                                                                                                                                                                    >As long as there's somebody in their bedroom hacking out a project for themselves, that then decides to share it somehow on the internet, it's still alive.

                                                                                                                                                                    You don't even need somebody. AI agents themselves can make and share projects.

                                                                                                                                                                    • overfeed 2 days ago

                                                                                                                                                                      > AI agents themselves can make and share projects

                                                                                                                                                                      Copyright can't be assigned to agents. You can't have Open Source without copyright as the enforcement mechanism. Millions of AI-generated, public-domain projects with no social proof to distinguish them is uncharted territory. My prediction is it would be shit-territory and worse than what we have currently.

                                                                                                                                                                      • charcircuit 2 days ago

                                                                                                                                                                        >You cant have Open Source without copyright as the enforcement mechanism.

                                                                                                                                                                        Enforce what? Attribution? Open Source doesn't require software to mandate attribution for it to be considered open source. Public domain software can be open source.

                                                                                                                                                                        • overfeed 2 days ago

                                                                                                                                                                          > Enforce what

                                                                                                                                                                          The license. Public domain and open source are distinct, IMO, legally, and with regards to communities (or lack thereof).

                                                                                                                                                                          > Public domain software can be open source.

                                                                                                                                                                          Maybe? I can't name a single public domain project off the top of my head, but I can name at least a couple for each of the Apache, BSD-/MIT-style[1] or GPL licenses.

                                                                                                                                                                          • cap11235 2 days ago

                                                                                                                                                                            SQLite is public domain

                                                                                                                                                                            • overfeed a day ago

                                                                                                                                                                              I can't believe I didn't know this!

                                                                                                                                                                              I got curious about how they solved contributions, since "public domain" means different things in different jurisdictions[0] - unlike copyright. It turns out they didn't solve it - there's a subsection on the SQLite site declaring it "Open Source, not Open Contribution" [1]. Much like Android, this follows the letter of Open Source, but not the spirit of it.

                                                                                                                                                                              I'll stand by my earlier assertion; wrangling public domain AI contributions is an even gnarlier problem to solve.

                                                                                                                                                                              0. Indeed, it may even be non-existent. Maintainers would want to protect the project from being "infected" by contributions with permaglued-copyright.

                                                                                                                                                                              1. https://www.sqlite.org/copyright.html

                                                                                                                                                                    • TiredOfLife 2 days ago

                                                                                                                                                                      > Remember when projects were getting overwhelmed by PRs from students just editing a line in a README so they could win a t-shirt? That was 2020, and they weren't using AI.

                                                                                                                                                                      A similar thing happened again when a popular educational video demonstrated how to add your name to a popular GitHub repo and called on viewers to do it.

                                                                                                                                                                    • Veedrac 2 days ago

                                                                                                                                                                      > From what I've seen, models have hit a plateau where code generation is pretty good...

                                                                                                                                                                      > But it's not improving like it did the past few years.

                                                                                                                                                                      As opposed to... what? The past few months? Has AI progress so broken our minds as to make us stop believing in the concept of time?

                                                                                                                                                                      • martinald 2 days ago

                                                                                                                                                                        Yes a strange comment. Opus 4.5 is significantly better than before and Opus 4.6 is even better. Same with the 5.2 and 5.3 Codex models.

                                                                                                                                                                        If anything, the pace has increased.

                                                                                                                                                                        This may be one of the most important graphs to keep an eye on: https://metr.org/ and it tracks well with my anecdotal experience.

                                                                                                                                                                        You can see the industry did hit a bit of a wall in 2024, where improvements dropped below the log trend. However, in 2025 the industry is significantly _above_ the trend line.

                                                                                                                                                                        • bamboozled a day ago

                                                                                                                                                                          Are you seeing any meaningful improvements to anything you use, though? Like, have self-driving cars become really cheap and commonplace? Medicine improved? Is Netflix giving us an abundance of cheap, really good content to watch? How is your AI doctor?

                                                                                                                                                                          The geeks are telling us the LLMs are great, but that's about it.

                                                                                                                                                                          I'm seeing way more AI-generated YouTube thumbnails... I know you will say "give it time", but I'm pretty convinced the problems AI solves are not the hard problems required to boost an economy.

                                                                                                                                                                        • mkozlows 2 days ago

                                                                                                                                                                          The wild thing is, that "plateau" link is from September 2025, aka two months before Opus 4.5.

                                                                                                                                                                          Yeah, it's not a plateau.

                                                                                                                                                                          • Aurornis 2 days ago

                                                                                                                                                                            I see these claims in a lot of anti-LLM content, but I’m equally puzzled. The pace of progress feels very fast right now.

                                                                                                                                                                            There is some desire to downplay or dismiss it all, as if the naysayers are going to get their “told you so” moment and it’s just around the corner. Yet the goalposts for that moment just keep moving with each new release.

                                                                                                                                                                            It’s sad that this has turned into a culture war where you’re supposed to pick a side and then blind yourself to any evidence that doesn’t support your chosen side. The vibecoding maximalists do the same thing on the other side of this war, but it’s getting old on both sides.

                                                                                                                                                                            • shabatar a day ago

                                                                                                                                                                              Yeah, I feel that too. It'd be great if people acknowledged the progress without turning it into polarized movements and numerous discussions about how we all lag behind...

                                                                                                                                                                              • bamboozled a day ago

                                                                                                                                                                                What I feel is that people are claiming progress is being made, but on what front ?

                                                                                                                                                                          The machines might be producing more code at a faster rate, but what has that actually amounted to?

                                                                                                                                                                              • hattmall 2 days ago

                                                                                                                                                                          I mean, if you compare the last year of progress with the year before it, and then that year with the one before, and so on back three years, wouldn't you see something like a plateau in effectiveness?

                                                                                                                                                                          I still have several projects I developed in mid-2024 where I felt the AI was really close but not quite good enough for production, and almost two years on, they haven't gotten appreciably better to the point where I would be able to release an actual application.

                                                                                                                                                                              • tibiahurried 2 days ago

                                                                                                                                                                          The Internet was a fun place … until it turned to s.. with ads all over. Social media destroyed it.

                                                                                                                                                                                AI is killing creativity and human collaboration; those long nights spent having pizza and coffee while debugging that stubborn issue or implementing yet another 3D engine… now it is all extremely boring.

                                                                                                                                                                                • Aurornis 2 days ago

                                                                                                                                                                                  You can still debug that hobby 3D engine any way you want. Anything you could do 5 years ago you can still do now.

                                                                                                                                                                                  There is an entire new world of people having fun with LLM coding. There are people having fun with social media, too. These people having fun with their thing doesn’t make your thing less fun for you to do.

                                                                                                                                                                                  Let people enjoy things. You can do your own thing and they do theirs. The internet is a big place and there’s room for everyone to find their own way to have fun. If you can’t enjoy your thing because someone else is doing it differently, that’s a you problem.

                                                                                                                                                                                  • RalfWausE 2 days ago

                                                                                                                                                                          There cannot be coexistence; "AI" needs to be destroyed.

                                                                                                                                                                                    • adithyassekhar 2 days ago

                                                                                                                                                                                      Not really when there's economic incentive and when you need to eat.

                                                                                                                                                                                      • josephg 2 days ago

                                                                                                                                                                                        So you want to have fun coding by hand while also making bank along the way? Yeah, those days seem to be increasingly over.

                                                                                                                                                                          This is new for us, but it's not new globally. There used to be professional portrait painters before photography ruined it. Lots of great artists honed their skills and made a living that way. And there were skilled weavers before the loom. Computers (humans who computed things) before the digital computer was invented. And so on. And I'm sure the first photographs didn't look as good as a skilled portrait painting. Arguably they still don't. But that didn't save portrait painting as a profession.

                                                                                                                                                                                        We’ll be the same. You can still write code by hand for fun, just like you can paint for fun. I’m currently better at solving problems and writing code than Claude. But Claude is faster than I am, and it’s improving much faster than I am. I think the days of making big money for writing software by hand are mostly over.

                                                                                                                                                                                    • raw_anon_1111 2 days ago

                                                                                                                                                                                      The first banner ad on the web was in 1994. The commercial web has always been ad supported.

                                                                                                                                                                                      Yes I know about Usenet. I was on it in 1992.

                                                                                                                                                                                      • JumpCrisscross 2 days ago

                                                                                                                                                                                        > AI is killing creativity and human collaboration; those long nights spent having pizza and coffee while debugging that stubborn issue

                                                                                                                                                                                        AI is currently designed to be used somewhat antisocially. But nothing stops it from helping a team collaborate. Collaborative vibe working would be a fine place to wind up.

                                                                                                                                                                                        • murphyslaw 2 days ago

                                                                                                                                                                                          Older people like me could say that the Internet was a fun place until AOL came along.

                                                                                                                                                                                          IMO we're going to just have to deal with AI, like it or not.

                                                                                                                                                                                          • GaryBluto 2 days ago

                                                                                                                                                                                            > AI is killing creativity and human collaboration; those long nights spent having pizza and coffee while debugging that stubborn issue or implementing yet another 3D engine… now it is all extremely boring.

                                                                                                                                                                                            One could also say Multi-Drug Therapy killed the solidarity and shared struggle found in leper colonies.

                                                                                                                                                                                          • VerifiedReports 2 days ago

                                                                                                                                                                                            "a writer used hallucinated quotes"

                                                                                                                                                                                            No; FABRICATED quotes. We have a perfectly good, correct word for what's going on.

                                                                                                                                                                                            • cowboylowrez a day ago

                                                                                                                                                                          no, the word "fabricated" implies a deliberate action, which could quite possibly have negative connotations for the fabricator. "Hallucinated" is something more: it's fabrication laundered through an LLM.

                                                                                                                                                                                              • VerifiedReports 15 hours ago

                                                                                                                                                                                                Nope. It just means made-up... like your "definition."

                                                                                                                                                                                                Or were you in a fugue state and hallucinating?

                                                                                                                                                                                                • cowboylowrez 9 hours ago

                                                                                                                                                                                                  i'm not a bot ur a bot

                                                                                                                                                                                            • itypecode 2 days ago

                                                                                                                                                                          AI code review can be useful, but I always review the code it produces, coupled with intentional prompting and detailed tasks.

                                                                                                                                                                                              > But I wouldn't run my production apps—that actually make money or could cause harm if they break—on unreviewed AI code.

                                                                                                                                                                                              I hope no one is actually letting unreviewed code through. AI can, and _will_ make mistakes.

                                                                                                                                                                                              Nowadays > 90% of my code tasks are handled by AI. I still review and guide it to produce what I intended to do myself.

                                                                                                                                                                                              • thunderbong 2 days ago

                                                                                                                                                                                                Looks to me that the issue is with the PR process, not with open-source.

                                                                                                                                                                                                From the article -

                                                                                                                                                                                                > It's gotten so bad, GitHub added a feature to disable Pull Requests entirely. Pull Requests are the fundamental thing that made GitHub popular. And now we'll see that feature closed off in more and more repos.

                                                                                                                                                                                                I don't have a solution for this, I'm pointing to the flaw in the assumption that AI is destroying open-source.

                                                                                                                                                                                                • mycall 2 days ago

                                                                                                                                                                                                  The solution is forking. Make a fork, update it to your heart's content. If it is found to be solid later, perhaps it will be studied and forked itself.

                                                                                                                                                                                                  • JKCalhoun 2 days ago

                                                                                                                                                                                                    Yeah, been thinking that we should let the LLMs run riot on special AI branches— or heck, maybe Microsoft can buy/create AIGitHub.com.

                                                                                                                                                                                                    • foxglacier 2 days ago

                                                                                                                                                                                                      But that's already true. Github lets people make a fork and have their AI run riot on it. What are you really suggesting if not the status quo?

                                                                                                                                                                                                      • JKCalhoun a day ago

                                                                                                                                                                                                        That we embrace it generally. Even just proposing a naming convention would allow for agents to find the AI-sanctioned branch (or create it) and have at it.

                                                                                                                                                                                                        (Maybe some AI agents can collaborate on "AILinux" and we can see how it measures up, ha ha.)

                                                                                                                                                                                                    • esafak 2 days ago

                                                                                                                                                                                                      Now every contributor has a fork. That's bad for consumers. Forks should be temporary.

                                                                                                                                                                                                  • jandrewrogers 2 days ago

                                                                                                                                                                                                    It didn’t take AI to destroy Open Source, we were already doing it to ourselves. LLMs just magnified the existing structural issues and made them even easier to exploit. But the trajectory was already clear.

                                                                                                                                                                                                    • overgard 2 days ago

                                                                                                                                                                                                      Unintentionally mean sounding statement but...

                                                                                                                                                                          From my observation, the people who are most excited about AI are low-skilled/unskilled people in that domain. If said people treated AI as a learning tool, everything would be great (I think AI can be a really effective teacher if you're truly motivated to learn). The problem is those people think they "now have the skill", even though they don't. They essentially become walking examples of the Dunning-Kruger effect (the cognitive bias where people with limited knowledge or competence in a particular domain greatly overestimate their own knowledge or competence).

                                                                                                                                                                          The problem with being able to produce an artifact that superficially looks like a good product, without the struggle that comes with true learning, is that you miss out on all the supporting knowledge you actually need to judge the quality of the output and fix it, or even the taste to guide the agent toward good patterns rather than poor ones.

                                                                                                                                                                                                      I'd encourage people that are obsessed with cutting edge AI and running 5000 Claude agents simultaneously to vibe code a website to take a step back and use the AI to teach them fundamentals. Because if all you can do is prompt, you're useless.

                                                                                                                                                                                                      • cowboylowrez a day ago

                                                                                                                                                                                                        >I think AI can be a really effective teacher if you're truly motivated to learn

                                                                                                                                                                          This! I use simple freebie Gemini queries and actually read the code produced, because that's the actual intent of me asking Gemini a question.

                                                                                                                                                                          Now that doesn't mean I mind folks running these vibe coding agents, and I bet that in the hands of an invested individual aware of the risks, some percentage of those agents could be used to test results, "devil's advocate" type things like "search for security problems in this code", "break into this system", etc. Don't get me wrong, I haven't ever actually used agents, so my assumptions about what they can do are probably pretty naive lol, but there's no denying that neural networks aren't going anywhere.

                                                                                                                                                                          My biggest problem with this seems to be slanted more toward the messed-up economics and politics of it all, which is really just the usual clueless people encountering and using tech from the clued-in people, with a dash of actual psychopathy tossed in for that extra spicy spicy hit. Witness my favorite bad guys suggesting an actual ban on states issuing regulations regarding AI. And then when I see the money being borrowed and the ramifications of the budgets required to repay it, and the fraud, the social media AI slop, and the actual deceptive possibilities that are most certainly being used, well, it's just all so tiresome.

                                                                                                                                                                                                      • silverwind 2 days ago

                                                                                                                                                                                                        I think AI is a huge boon as it reduces the human bottleneck.

                                                                                                                                                                          AI is a tool that must be used well, and many people currently raising pull requests seem to think that they don't even need to read the changes, which puts an unnecessary burden on the maintainers.

                                                                                                                                                                          The first review must be by the user who prompted the AI, and it must be thorough. Only then would I even consider raising a PR against any open source project.

                                                                                                                                                                                                        • maplethorpe a day ago

                                                                                                                                                                                                          I think this is still applying old-school thinking to the problem. The solution to the PR problem is probably something closer to an agentic swarm embedded within the repo, which reviews and approves PRs without any human intervention needed. This way both sides are happy.

                                                                                                                                                                                                        • dtnewman 2 days ago

                                                                                                                                                                                                          Open Source isn't going anywhere. Open Contribution might be on the way out. I built an open source command line tool (https://github.com/dtnewman/zev) that went very minorly viral for a few days last year.

                                                                                                                                                                                                          What I found in the following week is a pattern of:

                                                                                                                                                                          1) People reaching out with feature requests (useful)
                                                                                                                                                                          2) People submitting minor patches that take up a few lines of code (useful)
                                                                                                                                                                          3) People submitting larger PRs that were mostly garbage

                                                                                                                                                                          #1 above isn't going anywhere. #2 is helpful, especially since these are easy to check over. For #3, MOST of what people submitted wasn't AI slop per se, but it just wasn't well thought out, or was of poor quality, or was a feature that I just didn't want in the product. In most cases, I'd rather have a #1 and implement it myself, in the way that I want the code organized, than have someone submit a PR with poorly written code. What I found is that when I engaged with people in this group, I'd see them post on LinkedIn or X the next day bragging about how they contributed to a cool new open-source project. For me, the maintainer, it was just annoying, and I wasn't putting this project out there to gain the opportunity to mentor junior devs.

                                                                                                                                                                          In general, I like the SQLite philosophy of "open source, not open contribution". They are very explicit about this, but it's important for anyone putting out an open source project to remember that you have ZERO obligation to accept any code or feature requests. None.

                                                                                                                                                                                                          • aethertap 2 days ago

                                                                                                                                                                                                            This comment really hit me - I have a few things I've worked on but never released, and I didn't even realize it was basically because I don't want to deal with all of that extra stuff. Maybe I'll release them with this philosophy.

                                                                                                                                                                                                          • softwaredoug 2 days ago

                                                                                                                                                                                                            Are there maintainers of mature open source projects that can share their AI coding workflow?

                                                                                                                                                                                                            The bias in AI coding discussions heavily skews greenfield. But I want to hear more from maintainers. By their nature they’re more conservative and care about balancing more varied constraints (security, performance, portability, code quality, etc etc) in a very specific vision based on the history of their project. They think of their project more like evolving some foundational thing gradually/safely than always inventing a new thing.

                                                                                                                                                                                                            Many of these issues don’t yet matter to new projects. So it’s hard to really compare the greenfield with a 20 year old codebase.

                                                                                                                                                                                                            • giancarlostoro 2 days ago

                                                                                                                                                                          I mean, I have grabbed random non-greenfield projects and added features to them for my temporary/personal needs with Claude Code. The key thing is setting it up. The biggest thing is adopting good programming principles, like breaking up god classes. It turns out that things that help human devs consume code more easily work for LLMs too.

                                                                                                                                                                                                              • softwaredoug 2 days ago

                                                                                                                                                                                                                I have done this sort of thing too. I’m curious about big, mature projects like numpy or the Linux kernel.

                                                                                                                                                                                                                It seems the users of this are so varied that refactors like what you describe would be rolled out more gradually than the usual AI workflow.

                                                                                                                                                                                                                • giancarlostoro 2 days ago

                                                                                                                                                                          I mean, you could do it. My concern is projects you have total control of that have files much larger than your model can hold in its context window; if you break up a legacy codebase so more files have more structure, it could work better. The other alternative is to make "map" files like TS has, or like headers in C where it's just full of method definitions and really short descriptions: basically, map out your entire codebase in easier-to-digest files so the model can find the functions it's looking for. I usually have Claude give itself a summary of what's where in the codebase in the instructions.md file, so it knows where to go instead of grepping around and wasting more tokens.
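
A minimal sketch of that "map file" idea, assuming a Python codebase: walk the tree, pull out top-level class and function signatures, and write them into one summary file the agent can read instead of grepping around. The output file name and the Python-only scope are illustrative assumptions, not part of any existing tool.

```python
# Sketch of the "codebase map" idea from the comment above, not a real tool:
# walk a repo, collect top-level class/function signatures, and write them
# to a single summary file an agent can read instead of grepping the tree.
# The output name (CODEBASE_MAP.md) and Python-only scope are assumptions.
import ast
from pathlib import Path


def summarize_file(path: Path) -> list[str]:
    """Return one line per top-level class or function in a Python file."""
    try:
        tree = ast.parse(path.read_text(encoding="utf-8"))
    except (SyntaxError, UnicodeDecodeError):
        return []
    lines = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            args = ", ".join(a.arg for a in node.args.args)
            doc = (ast.get_docstring(node) or "").splitlines()
            summary = doc[0] if doc else ""
            lines.append(f"  def {node.name}({args})  # {summary}".rstrip())
        elif isinstance(node, ast.ClassDef):
            lines.append(f"  class {node.name}")
    return lines


def build_map(root: str = ".", out: str = "CODEBASE_MAP.md") -> None:
    """Write a short 'what lives where' index for every .py file under root."""
    entries = []
    for path in sorted(Path(root).rglob("*.py")):
        summary = summarize_file(path)
        if summary:
            entries.append(f"{path}:\n" + "\n".join(summary))
    Path(out).write_text("\n\n".join(entries) + "\n", encoding="utf-8")


if __name__ == "__main__":
    build_map()
```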

                                                                                                                                                                                                            • yoasif_ a day ago
                                                                                                                                                                                                              • JumpCrisscross 2 days ago

                                                                                                                                                                                                                > GitHub added a feature to disable Pull Requests entirely. Pull Requests are the fundamental thing that made GitHub popular

                                                                                                                                                                                                                There is a temporary solution. Let maintainers limit PRs to accounts that were created prior to November 30 2022 [1]. These are known-human accounts.

                                                                                                                                                                                                                Down the road, one can police for account transfers and create a system where known-human accounts in good standing can vouch for newer accounts. But for now that should staunch the bleeding.

                                                                                                                                                                                                                [1] https://en.wikipedia.org/wiki/ChatGPT
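
A rough sketch of that account-age gate, assuming the public GitHub REST API (GET /users/{username}) and ChatGPT's release date as the cutoff. The helper names and the "known human" interpretation are illustrative assumptions; this is a heuristic a maintainer could script, not an existing GitHub feature.

```python
# Sketch of the "known-human account" gate suggested above: look up when a
# PR author's GitHub account was created and compare it against ChatGPT's
# public release date. Uses the public REST API (GET /users/{username});
# the cutoff and the pass/fail policy are assumptions for illustration.
import json
import urllib.request
from datetime import datetime, timezone

CUTOFF = datetime(2022, 11, 30, tzinfo=timezone.utc)


def account_created_at(username: str) -> datetime:
    """Fetch the creation timestamp of a GitHub account."""
    url = f"https://api.github.com/users/{username}"
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    # created_at looks like "2011-01-25T18:44:36Z"
    return datetime.fromisoformat(data["created_at"].replace("Z", "+00:00"))


def predates_chatgpt(username: str) -> bool:
    """True if the account existed before the cutoff (a crude 'known human' proxy)."""
    return account_created_at(username) < CUTOFF


if __name__ == "__main__":
    print(predates_chatgpt("torvalds"))  # an account created in 2011, so True
```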

                                                                                                                                                                                                                • lacunary 2 days ago

                                                                                                                                                                          I've heard that what in the past would have been called spammers create large numbers of fake accounts and then sit on them for years, just to bypass these types of schemes. I guess you could augment this by checking for some level of human-like activity before that date.

                                                                                                                                                                                                                • mhitza 2 days ago

                                                                                                                                                                          The hard drive shortage is already old news; CPUs are next.

                                                                                                                                                                                                                  • jongjong 2 days ago

                                                                                                                                                                                                                    My current position is that AI companies should be taxed and the money should be distributed to open source developers.

                                                                                                                                                                                                                    There is a strong legal basis for this to happen because if you read the MIT license, which is one of the most common and most permissive licenses, it clearly states that the code is made available for any "Person" to use and distribute. An AI agent is not a person so technically it was never given the right to use the code for itself... It was not even given permission to read the copyrighted code, let alone ingest it, modify it and redistribute it. Moreover, it is a requirement of the MIT license that the MIT copyright notice be included in all copies or substantial portions of the software... Which agents are not doing in spite of distributing substantial portions of open source code verbatim, especially when considered in aggregate.

                                                                                                                                                                                                                    Moreover, the fact that a lot of open source devs have changed their views on open source since AI reinforces the idea that they never consented to their works being consumed, transformed and redistributed by AI in the first place. So the violation applies both in terms of the literal wording of the licenses and also based on intent.

                                                                                                                                                                          Moreover, the usage of code by AI goes beyond just a copyright violation of the code/text itself; they appropriated ideas and concepts without giving due credit to their originators, so there is a deeper ethical component involved: we don't have a system to protect human innovation from AI. Human IP is completely unprotected.

                                                                                                                                                                                                                    That said, I think most open source devs would support AI innovation, but just not at their expense with zero compensation.

                                                                                                                                                                                                                    • foxglacier 2 days ago

                                                                                                                                                                                                                      > they appropriated ideas and concepts, without giving due credit to their originators so there is a deeper ethical component

                                                                                                                                                                          No there isn't. We're all free to copy each other's ideas and concepts and not give any credit to their "originators", who usually aren't even the first people to think of them but just the previous person in the chain of copying ideas. That's how progress happens. No, we should not inhibit our use of knowledge because every idea "belongs" to somebody.

                                                                                                                                                                                                                      I'm not talking about copyright here, which is different and doesn't usually protect ideas and concepts anyway, at least none that are useful.

                                                                                                                                                                                                                      • jongjong 2 days ago

                                                                                                                                                                                                                        That's why I alluded to the fact that this was more of an ethical matter than a legal matter. Though it should be a legal matter. It's just hard to measure, for the reasons you suggested... Doesn't mean we shouldn't try to approximate something fairer.

                                                                                                                                                                                                                        We've crossed a threshold whereby economic value creation is not fairly rewarded. The economy became a kind of winner-takes-all game of who can convince people to pay for stuff and lock them in first... Or who can wedge themselves first between large pre-existing corporate money flows.

                                                                                                                                                                                                                        It's like the office politics, bureaucracy and corruption that everyone hates has become the core reward mechanism of the economy. It was never designed that way but a combination of factors exacerbated by underlying system flaws and perverse incentives got us there.

                                                                                                                                                                          There's already way too much false advertising. The winners of this game are those who can sell a dream. It doesn't matter if they don't deliver, because by the time people figure it out, they've already sold their startup and moved on to other things. Everyone is kept in a constant state of chasing the next big thing, and it doesn't solve any problems. Human potential is just wasted on creating elaborate illusions which ultimately satisfy no one.

                                                                                                                                                                                                                    • jockm 2 days ago

                                                                                                                                                                                                                      I am curious, do we have any confirmation that the "AI hit piece" thing is real? It feels like everyone is just assuming it is, but it would be nice to see some confirmation.

                                                                                                                                                                                                                      Additionally Geerling raises good points, but I am not sure we should jump to his conclusion yet.

                                                                                                                                                                                                                      • drob518 2 days ago

                                                                                                                                                                                                                        > And they say "this time it's different", but it's not.

                                                                                                                                                                                                                        It never is. You know you’ve hit peak bubble when everyone you know is investing in the new hotness and saying, “This time it’s different.” When that happens, get ready to short the market.

                                                                                                                                                                                                                        • cagz 2 days ago

                                                                                                                                                                          Low-quality AI-created PRs that are submitted to open-source repositories are prompted by humans. And those are the same humans who fail to review the AI's output properly before submitting it (or letting the AI submit it) as a PR. Let's not blame the tools instead of the bad workmanship.

                                                                                                                                                                          A smaller number of PRs are generated by OpenClaw-type bots, which also act on their owner's direct or implied instructions. I mean, someone is giving them GitHub credentials and letting them loose.

                                                                                                                                                                                                                          AI is also allowing the creation of many new open-source projects, led by responsible developers.

                                                                                                                                                                                                                          Given the exponential speed at which AI is progressing, surely the quality of such PRs is going to improve. But there are also opportunities for the open-source community to improve their response. It will sound controversial, but AI can be used to perform an initial review of PRs, suggest improvements, and, in extreme cases, reject them.
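
One hedged sketch of what such an initial AI review could look like: pull the PR diff from the GitHub REST API and ask a model for a triage note. The model name, the prompt wording, and the use of the openai Python client are assumptions for illustration; the output is meant as a hint for a human maintainer, not an automatic accept/reject decision.

```python
# A sketch only of the "AI does an initial review" idea: fetch a PR's diff
# from the GitHub REST API and ask a model for a short triage note. The
# model name, prompt, and openai client usage are assumptions; the result
# is a hint for a human maintainer, not an approve/reject gate.
import urllib.request

from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY to be set


def fetch_pr_diff(owner: str, repo: str, number: int) -> str:
    """Download the raw diff of a pull request via the REST API."""
    req = urllib.request.Request(
        f"https://api.github.com/repos/{owner}/{repo}/pulls/{number}",
        headers={"Accept": "application/vnd.github.v3.diff"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")


def triage_pr(owner: str, repo: str, number: int) -> str:
    """Ask a model for a first-pass assessment of the PR diff."""
    diff = fetch_pr_diff(owner, repo, number)
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name, swap for whatever is available
        messages=[{
            "role": "user",
            "content": "Give a short first-pass review of this pull request "
                       "diff. Flag anything that looks low-effort, unrelated "
                       "to the project, or unsafe:\n\n" + diff[:20000],
        }],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    print(triage_pr("octocat", "Hello-World", 1))  # placeholder repo and PR number
```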

                                                                                                                                                                                                                          • 1970-01-01 2 days ago

                                                                                                                                                                                                                            We all know the solution will be yet another AI agent reviewing the reputation of the pull requests from the public and rating them. This even seems like an easy win for Microsoft and GitHub. Just make it already.

                                                                                                                                                                                                                            • _heimdall 2 days ago

                                                                                                                                                                                                                              Hello, social credit scores.

                                                                                                                                                                                                                              • 1970-01-01 a day ago

                                                                                                                                                                          You would never frame it like that. Think more like eBay buyer and seller ratings for GitHub pull requests. Seems very obvious. The reputation system doesn't involve money of any kind, just a number rating the overall smoothness of a pull request transaction from the perspective of both sides.

                                                                                                                                                                                                                                • overgard 2 days ago

                                                                                                                                                                                                                                  What could go wrong..

                                                                                                                                                                                                                                  I wonder if anyone in tech ever watches Black Mirror (other than the AI zealots that seem to watch it and go "SO MANY GREAT IDEAS!")

                                                                                                                                                                                                                                  • duskdozer a day ago

                                                                                                                                                                                                                                    Why would I ever need to study humanities or ethics? I just want to build cool stuff!

                                                                                                                                                                                                                              • brainless 2 days ago

                                                                                                                                                                                                                                More people are jumping in because of the thrill of it.

                                                                                                                                                                          We are in the early days, and I believe things will get better as more people calm the f down. People who have built things for ages will continue to do so, with or without coding agents.

                                                                                                                                                                          In the long term, I think Open Source will win. I can imagine content management systems, eCommerce software, CRMs, etc. all becoming coding-agent friendly - the customer can customize the core software with agents, and the scaffold would provide fantastic guardrails.

                                                                                                                                                                                                                                Self-hosting is already becoming way more popular than it ever was. People are downloading all sorts of tools to build software. Building is better. A structure needs to emerge.

                                                                                                                                                                                                                                • fraaancis a day ago

                                                                                                                                                                          The solution is in the problem. Open source maintainers start writing AGENTS.md and other files to guide those using LLMs, and future nerd sniping and elitism can be practiced in reference to the spec instead of the actual code. There could just be an agent template for maintaining or adding features to curl or projects like it.

                                                                                                                                                                                                                                  • VorpalWay 2 days ago

                                                                                                                                                                          This one is probably going to be controversial. But I feel highlighting the drawbacks is also important, not just the benefits.

                                                                                                                                                                                                                                    • stickynotememo 2 days ago

                                                                                                                                                                                                                                      It's quite unfortunate that this has to be controversial.

                                                                                                                                                                                                                                    • KnuthIsGod 2 days ago

                                                                                                                                                                          Science fiction magazines like Clarkesworld are being inundated with really terrible AI-generated stories.

                                                                                                                                                                                                                                      • anilgulecha 2 days ago

                                                                                                                                                                          Prior to LLMs, the concept of "Open Source" could co-exist with "Free Software" - one was a more pragmatic view of how to develop software, the other a political activist position on how the code powering our world should be.

                                                                                                                                                                                                                                        AI has laid bare the difference.

                                                                                                                                                                                                                                        Open Source is significantly impacted. Business models based on it are affected. And those who were not taking the political position find that they may not prefer the state of the world.

                                                                                                                                                                          Free software finds itself, at worst, a bit annoyed (it needs to figure out the slop problem) and, at best, with an ally in AI - the amount of free software being built right now for people to use is very high.

                                                                                                                                                                                                                                        • tjr 2 days ago

                                                                                                                                                                                                                                          I’ve seen different opinions. Can LLM-generated software be licensed under the GPL?

                                                                                                                                                                                                                                          • bonoboTP 2 days ago

                                                                                                                                                                                                                                            Your question has nothing to do with the GPL. If your concern is that the code may count as derivative work of existing code then you also can't use that code in a proprietary way, under any license. But that probably only applies if the LLM regurgitated a substantial amount of copyrighted code into your codebase.

                                                                                                                                                                                                                                            • tjr 2 days ago

                                                                                                                                                                                                                                              Fair; that was an example instance. People interested in “Free software” rather than “open source” seem to often favor the GPL, though other licensing options also count as “free software”.

But in any case, the real question is: can LLM-generated software be copyrighted at all? If not, it can't be put under any particular license.

                                                                                                                                                                                                                                              • bonoboTP 2 days ago

                                                                                                                                                                                                                                                Is your concern the potential for plagiarism or the lack of creative input from the human? If the latter, it would depend on how much intellectual input was needed from the human to steer the model, iterate on the solution etc.

                                                                                                                                                                                                                                            • hcayless 2 days ago

                                                                                                                                                                                                                                              If it can’t be copyrighted, then no. Licenses rely on the copyright holder’s right to grant the license. But that would also mean it’d be essentially public domain. I’m not sure there’s really settled legal opinion on this yet. Iirc it can’t be patented.

                                                                                                                                                                                                                                              • anilgulecha 2 days ago

                                                                                                                                                                                                                                                Can you link to them?

The way the world currently works, code created by someone (using AI) is treated as if it were authored by that someone. This is true across companies and FOSS. I think it's going to settle into this pattern.

                                                                                                                                                                                                                                            • dennysora 2 days ago

I'm genuinely unsure how to assess the current state of open source. Many projects have been AI-assisted in one way or another. Meanwhile, a large number of people use AI to generate code indiscriminately and publish it as open source. As a result, using open-source contributions as a proxy for someone's engineering ability has become increasingly unreliable. In addition, many developers now prefer to build their own solutions rather than rely on an existing open-source alternative. And for many small open-source projects, frankly, I hesitate to use them - given how prevalent malicious software has become, if it's feasible to build something in-house, I'd rather do that.

                                                                                                                                                                                                                                              • PaulDavisThe1st 2 days ago

                                                                                                                                                                                                                                                From my POV (30 or so years working on the same FLOSS project), AI isn't "destroying Open Source" through an effect on contributions. It is, however, destroying open source through its ceaseless, relentless, unabatable crawling of open source git repositories, commit by commit rather than via git-clone(1).

Project after project reports wasted time, increased hosting/bandwidth bills, and all-around general annoyance from this UTTER BULLSHIT. But every morning we wake up and it's still there, no sign of it ever stopping.

                                                                                                                                                                                                                                              • Razengan 2 days ago

                                                                                                                                                                                                                                                If you see my previous comments on the matter I absolutely don't trust AI for generating any code, but in the past couple days I've come to appreciate it for reviewing code.

I'm the sole maintainer of a gamedev "middleware" open source project for Godot, and the AIs have generally been crap about Godot stuff, frequently getting it wrong, but Codex helped me catch some future bugs that could have caused hard-to-spot mysterious behavior and a lot of head scratching.

I don't dare let it edit anything, but I look at its suggestions and implement them my way. Of course it's still wrong sometimes; if I trusted it blindly I would be f'ed. A few times I had to repeatedly tell it that some of its findings were incorrect or were actually the intended behavior, until it relented with "You're right. My assumption was based on..."

Also, while I would [probably] never let AI be the source of any of my core code, it's nice for experiments and what-ifs: since my project is basically a library of more-or-less standalone components, it's actually more favorable for AI to wire them together like prebuilt Lego blocks: I can tell it to "make a simple [gameplay genre] scene using existing components only, do not edit any code" and it lets me spot what's missing from the library.

In the end this too is a tool like everything else. I've always wanted to make games but I've always been sidetracked by "black hole" projects like trying to make engines and frameworks without ever actually finishing a full game, and I think it's time to welcome anything that helps me waste less time on the stuff that isn't an actual game :)

                                                                                                                                                                                                                                                • truncate 2 days ago

A few patterns I've noticed on the open-source projects I've worked on:

1. AI slop PRs (sometimes giant). The author responds to feedback with LLM-generated responses and shows little evidence of having given any thought of their own to the design decisions or implementation.

2. (1) often leads me to believe they probably haven't tested it properly or thought about edge cases. As a reviewer you now have to be extra careful about it (or just reject it).

3. A rise in students looking for a job/internship. The expectation is that untested, LLM-generated code will earn them positive points because they have now dug into the codebase. (I've had cases where they said they hadn't tested the code, but that it should "just work".)

4. People are now even lazier about cleaning up code.

                                                                                                                                                                                                                                                  Unfortunately, all of these issues come from humans. LLMs are fantastic tools and as almost everyone would agree they are incredibly useful when used appropriately.

                                                                                                                                                                                                                                                  • MBCook 2 days ago

                                                                                                                                                                                                                                                    > Unfortunately, all of these issues come from humans.

                                                                                                                                                                                                                                                    They are. They’ve always been there.

                                                                                                                                                                                                                                                    The problem is that LLMs are a MASSIVE force multiplier. That’s why they’re a problem all over the place.

                                                                                                                                                                                                                                                    We had something of a mechanism to gate the amount of trash on the internet: human availability. That no longer applies. SPAM, in the non-commercial sense of just noise that drowns out everything else, can now be generated thousands of times faster than real content ever could be. By a single individual.

                                                                                                                                                                                                                                                    It’s the same problem with open source. There was a limit to the number of people who knew how to program enough to make a PR, even if it was a terrible one. It took time to learn.

                                                                                                                                                                                                                                                    AI automated that. Now everyone can make massive piles of complicated plausible looking PRs as fast as they want.

To whatever degree AI has helped maintainers, it is not nearly as effective a tool at helping them as it is at helping others generate things that waste their time. Intentionally or otherwise.

You can't just argue that AI can be a benefit and therefore everything is fine. The externalities of it, in the digital world, are destroying things. And even if we develop mechanisms to handle the incredible volume, will we have much of value left by the time we get there?

                                                                                                                                                                                                                                                    This is the reason I get so angry at every pro AI post I see. They never seem to discuss the possible downsides of what they’re doing. How it affects the whole instead of just the individual.

                                                                                                                                                                                                                                                    There are a lot of people dealing with those consequences today. This video/article is an example of it.

                                                                                                                                                                                                                                                    • Nition 2 days ago

                                                                                                                                                                                                                                                      > Unfortunately, all of these issues come from humans.

                                                                                                                                                                                                                                                      I've been thinking about this recently. As annoying as all the bots on Twitter and Reddit are, it's not bots spinning up bots (yet!), it's other humans doing this to us.

                                                                                                                                                                                                                                                      • TOMDM 2 days ago

                                                                                                                                                                                                                                                        > it's not bots spinning up bots (yet!)

Well, some of them are, but the bots' bot is spun up by a human (or maybe by bot n+1)

                                                                                                                                                                                                                                                        • Nition 2 days ago

                                                                                                                                                                                                                                                          Great bots have little bots, if one should deign to write 'em

                                                                                                                                                                                                                                                          And little bots have lesser bots, and so ad infinitum...

                                                                                                                                                                                                                                                      • _--__--__ 2 days ago

                                                                                                                                                                                                                                                        If only I were lucky enough to get LLM generated responses, usually a question like "Did you consider if X would also solve this problem?" results in a flurry of force pushed commits that overwrite history to do X but also 7 other unrelated things that work around minor snags the LLM hit doing X.

                                                                                                                                                                                                                                                        • kerkeslager 2 days ago

                                                                                                                                                                                                                                                          I've got a few open source projects out there, and I've almost never received any PRs for them until AI, simply because they were things I did for myself and never really promoted to anyone else. But now I'm getting obviously-AI PRs on a regular basis. Somehow people are using AI to find my unpromoted stuff and submit PRs to it.

                                                                                                                                                                                                                                                          My canned response now is to respond, "Can you link me to the documentation you're using for this?" It works like a charm, the clanker doesn't ever respond.

                                                                                                                                                                                                                                                        • softwaredoug 2 days ago

What if we choose to see this as an “always fork” open source ecosystem?

I mean, I don't want you sending PRs to my vibe-coded project, but I also don't care if you fork it to make it useful for your needs.

                                                                                                                                                                                                                                                          We’ve been so worried about the burden of forking in the past - maybe that should change?

                                                                                                                                                                                                                                                          • amarant 2 days ago

                                                                                                                                                                                                                                                            We need to have a catchier term for AI assisted coding, so that we may easily distinguish it from Vibe coding slop.

Using AI to find relevant parts of a codebase, or to help you remember stuff like which annotations a data class needs for DB persistence (yes, I'm a Java server dev, hi!), is awesome. Having Claude solo-dev an application based on a prompt generated by GPT is something else entirely (pretty fun, but not very useful for anything more complicated than mega-trivial).

OpenClaw is like the third level to this, which also exists for some reason.

                                                                                                                                                                                                                                                            • keernan 2 days ago

                                                                                                                                                                                                                                                              >>... Crypto ... are [is] pretty much useless.

Other than to corrupt criminals and mafia types who have a need to covertly hide cash.

                                                                                                                                                                                                                                                              And then the current administration wants the government to 'protect' crypto investors against big losses. Gotta love it.

                                                                                                                                                                                                                                                              • nyc_data_geek1 2 days ago

                                                                                                                                                                                                                                                                And anyone who is under sanction or lives in a nation under economic sanction, and wants access to a means of sending payments across borders that would otherwise be closed to them.

                                                                                                                                                                                                                                                                And anyone who lives in a polity whose local currency may be undergoing rapid devaluation/inflation.

                                                                                                                                                                                                                                                                And anyone who needs a form of wealth that no local authority is technically capable of alienating them from - ie: if you need to pack everything in a steamer trunk to escape being herded into cattle cars, you can memorize a seed phrase and no one can stop you from taking your wealth with you.

                                                                                                                                                                                                                                                                And any polity who may no longer wish to use dollars as the international lingua franca of trade, as the global foreign exchange reserve currency, to reduce the degree to which their forex reserves prop up American empire.

                                                                                                                                                                                                                                                                Sadly, all of these use cases appear increasingly relevant as time goes on.

                                                                                                                                                                                                                                                                • keernan 2 days ago

                                                                                                                                                                                                                                                                  ok - I am willing to be educated. Thank you.

                                                                                                                                                                                                                                                                  • nyc_data_geek1 a day ago

                                                                                                                                                                                                                                                                    You're welcome. Glad you found this helpful, truthfully. Rare enough for any signal to cut through the noise online, these days.

                                                                                                                                                                                                                                                                • henry2023 2 days ago

                                                                                                                                                                                                                                                                  >> Other than by corrupt criminals and mafia types who have a need to covertly hide cash.

                                                                                                                                                                                                                                                                  I’ve got an Argentinian friend who sends crypto to his mother because he pays less than 0.5 % in fees and exchange rates instead of close to 5% using the traditional way. From now on I’ll call him a corrupt criminal.

                                                                                                                                                                                                                                                                  • keernan 2 days ago

                                                                                                                                                                                                                                                                    No need to be snarky. I didn't realize there actually were any legitimate reasons to own crypto.

                                                                                                                                                                                                                                                                    • foxglacier 2 days ago

                                                                                                                                                                                                                                                                      Really? For such an obvious use case, it sounds like everything you know about the topic is what you've heard from emotionally manipulative social media users. You should work out ideas yourself instead of just copying popular rhetoric which is often wrong, despite being popular.

                                                                                                                                                                                                                                                                  • the-anarchist 2 days ago

                                                                                                                                                                                                                                                                    > corrupt criminals and mafia types who have a need to covertly hide cash

                                                                                                                                                                                                                                                                    You're describing the people that use actual cash to launder and hide, well, cash, and that have done so for centuries, long before crypto had even been invented.

                                                                                                                                                                                                                                                                    A few web searches on <big bank name> + "money laundering scandal" (e.g. "HSBC money laundering scandal") can offer valuable insights.

                                                                                                                                                                                                                                                                    • keernan 2 days ago

                                                                                                                                                                                                                                                                      >> that have done so for centuries

                                                                                                                                                                                                                                                                      There is no doubt crypto processes trillions of dollars of illegal cash. Way easier for the illegal cash industry to wash their cash than ever before.

                                                                                                                                                                                                                                                                      • mulmen 2 days ago

                                                                                                                                                                                                                                                                        How does crypto make money laundering at scale easy?

                                                                                                                                                                                                                                                                    • RalfWausE 2 days ago

                                                                                                                                                                                                                                                                      Corrupt criminals and mafia types... a well fitting description of the US government

                                                                                                                                                                                                                                                                      • beeflet 2 days ago

                                                                                                                                                                                                                                                                        what about non-corrupt criminals?

                                                                                                                                                                                                                                                                      • wqtz 2 days ago

                                                                                                                                                                                                                                                                        I do not understand what bubble even means, and I do not think the developer influencers do either.

Was NFT or Crypto a bubble? The idea of a bubble is that it "pops" in dramatic fashion. NFT prices in aggregate faded slowly, and the impact only applied to a handful of individuals. Moreover, it can plausibly be speculated that much of the behavior we have seen with crypto and NFTs was driven by illicit financial engineering.

If a handful of bad PRs "are destroying open source," then Open Source as a concept is in a surprisingly vulnerable position. No project worth its salt ever integrates unverifiable PRs. No valid OSS project ever integrates uninvited PRs in the first place. Every PR is driven by an issue or a very robust, specific description. A project receiving an "unsolicited" PR doesn't make its maintainer yell "Oh, I am ruined."

I have stopped checking out these programming content videos for the last year or so. But I stupidly did it here. Every single channel has become like Coffeezilla with an agenda, casting AI as a catalyst of great harm.

                                                                                                                                                                                                                                                                        • mcphage 2 days ago

> Open Source as a concept is in a surprisingly vulnerable position

                                                                                                                                                                                                                                                                          Yes, that’s been a known problem for a while. This comic: https://xkcd.com/2347/ is a popular illustration of the problem from 2020, but the problem itself was well known before that.

                                                                                                                                                                                                                                                                        • periodjet 2 days ago

                                                                                                                                                                                                                                                                          I enjoy letting the air out of hype bubbles as much as the next guy, but this kind of article is an example of the OTHER side of this particular undesirable coin: needless doomerism for the purposes of attention and virtue farming. Miss me with it.

                                                                                                                                                                                                                                                                          • foxglacier 2 days ago

Yep. He's just repeating the same popular opinions you hear every day, with just as little reasoning and evidence to support them as every other AI doomer offers. He even has the same standard opinion of cryptocurrency that goes along with it. I suppose this sort of post acts as a summary of popular opinion, but not as a source of useful ideas about the topic itself.

                                                                                                                                                                                                                                                                          • EZ-E 2 days ago

Maybe some type of reputation system could help, i.e. "karma" but for GitHub: it increases whenever you make good contributions that get merged, and decreases if you submit slop that gets rejected.
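Purely as a toy sketch of what I mean (the weights are made up, and a real system would need decay and anti-gaming measures on top):

    def update_karma(karma: int, merged: bool, rejected_as_slop: bool) -> int:
        # Made-up weights for illustration only.
        if merged:
            karma += 5          # accepted contribution
        if rejected_as_slop:
            karma -= 10         # maintainer flagged the PR as slop
        return karma

    karma = update_karma(0, merged=True, rejected_as_slop=False)       # 5
    karma = update_karma(karma, merged=False, rejected_as_slop=True)   # -5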

                                                                                                                                                                                                                                                                            • zzo38computer 20 hours ago

One problem with that is that a good contribution might be used even if it cannot be merged directly for whatever reason (although the contributor might be listed as a co-author in that case).

                                                                                                                                                                                                                                                                            • jazz9k 2 days ago

                                                                                                                                                                                                                                                                              "If this was a problem already, OpenClaw's release, and this hiring by OpenAI to democratize agentic AI further, will only make it worse. Right now the AI craze feels the same as the crypto and NFT boom, with the same signs of insane behavior and reckless optimism."

                                                                                                                                                                                                                                                                              There are definitely people abusing AI and lying about what it can actually do. However, Crypto and NFTs are pretty much useless. Many people (including me) have already increased productivity using LLMs.

                                                                                                                                                                                                                                                                              This technology just isn't going away.

                                                                                                                                                                                                                                                                              • ggm 2 days ago

The underlying tech of a signed public ledger, using chaining methods and Merkle trees to record things with non-repudiation - that's useful. I went to a meeting which included people from the Reserve Bank of Australia or the financial regulator, and they said that between nation states, settlement was about mutuality, and absent a regulator to tell you what to do, federated processes around things like this were entirely rational choices. Nothing whatsoever about Bitcoin, Ethereum, the hype machine, or rug pulls - just the underlying tech using normal PKI, some data structures, and HSM-backed processes. The regulator said informally that in a regulated monopoly, agreeing to use mutual-(dis)trust methods like a chain would be acceptable to the regulator as an audit method. Nothing about you or me, nothing about hype. Mechanistic settlement methods among competitors, in a reasonably transparent manner.
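To be concrete about what I mean by "chaining" - a toy sketch of my own, not anything the regulator discussed; real deployments would use proper asymmetric signatures with HSM-held keys rather than the bare "signer" label here:

    import hashlib
    import json

    def entry_hash(body: dict) -> str:
        # Canonical JSON so every participant hashes the same bytes.
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    def append(ledger: list, payload: dict, signer: str) -> None:
        # Each entry commits to the previous entry's hash.
        prev = ledger[-1]["hash"] if ledger else "0" * 64
        body = {"prev": prev, "payload": payload, "signer": signer}
        ledger.append({**body, "hash": entry_hash(body)})

    def verify(ledger: list) -> bool:
        prev = "0" * 64
        for e in ledger:
            body = {"prev": e["prev"], "payload": e["payload"], "signer": e["signer"]}
            if e["prev"] != prev or e["hash"] != entry_hash(body):
                return False
            prev = e["hash"]
        return True

    ledger = []
    append(ledger, {"from": "bank_a", "to": "bank_b", "amount": 100}, "bank_a")
    append(ledger, {"from": "bank_b", "to": "bank_a", "amount": 40}, "bank_b")
    print(verify(ledger))                   # True
    ledger[0]["payload"]["amount"] = 10**6  # tamper with history
    print(verify(ledger))                   # False: the tampered entry no longer matches its hash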

Some payment chains are painful. An awful lot of middlemen take a cut. Some payment chains impose burdens on the endpoint, like 90-day settlement debts, which could be avoided with some use of tech. Nothing about the hype, just modifications to financial transactions - but they could be done other ways as well (as could the settlement ideas above).

NFTs follow the same logic as bearer bonds. They're useful in very specific situations of value transfer, and almost nothing else. The use isn't about the artwork on the front; it's the possession of a statement of value. Like bonds, they get discounted. The value is therefore a function of the yield and the trust in the chain of sigs asserting it's a debt of that value. Not identical, but the concept, stripped of the ego element, isn't that far off.

                                                                                                                                                                                                                                                                                Please note I think bored ape and coins are stupid. I am not attempting to promote the hype.

                                                                                                                                                                                                                                                                                AI is the same. LLMs are useful. There are functional tools in this. The sheer amount of capital being sunk into venture plays is however, disconnected from that utility.

                                                                                                                                                                                                                                                                                • Terr_ 2 days ago

                                                                                                                                                                                                                                                                                  Half-agree: "Blockchain" systems contain new and useful technology, but the useful technology is not new, and the new technology is not-so-useful. If we keep the useful stuff, we're basically back at regular old distributed databases.

                                                                                                                                                                                                                                                                                  The key blockchain requirement is allowing unrestricted node membership. From that flows a dramatic explosion of security issues, performance issues, and N-level deep workarounds.

                                                                                                                                                                                                                                                                                  In the case of a bunch of banks trying to keep each other honest, it's drastically simpler/faster/cheaper to allocate a certain number of fixed nodes to be run by different participants and trusted outside institutions.

One doesn't need to trust every node, just that a majority is unlikely to be suborned, and you'll know in advance which majorities are possible. The bank in Australia probably doesn't want or need to shift some of that responsibility outside the group, onto literally anybody who shows up with some computing power.
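A toy illustration of that fixed-membership point (the participant names and the simple-majority rule are invented for the example):

    # A record is accepted only if a majority of a known, fixed set of
    # participants vouch for it; anyone outside the set is simply ignored.
    VALIDATORS = {"bank_a", "bank_b", "regulator", "auditor", "exchange"}

    def accepted(endorsements: set) -> bool:
        votes = endorsements & VALIDATORS
        return len(votes) > len(VALIDATORS) // 2   # simple majority of known nodes

    print(accepted({"bank_a", "bank_b", "regulator"}))  # True
    print(accepted({"bank_a", "random_stranger"}))      # False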

                                                                                                                                                                                                                                                                                  • ggm 2 days ago

                                                                                                                                                                                                                                                                                    That's fair.

                                                                                                                                                                                                                                                                                  • akoboldfrying 2 days ago

                                                                                                                                                                                                                                                                                    An analogy for cryptocurrency that I like is lasers. I remember reading an Usborne book about lasers as a kid and thinking they were the coolest thing ever and would doubtless find their way into every technology because glowing beams of pure light energy, how could they not transform the world!

                                                                                                                                                                                                                                                                                    Lasers turn out to be useful for... eye surgery, and pointing at things, and reading bits off plastic discs, and probably a handful of other niche things. There just aren't that many places where what they can do is actually needed, or better than other more pedestrian ways of accomplishing the same thing. I think the same is true of crypto, including NFTs.

                                                                                                                                                                                                                                                                                    • ggm 2 days ago

                                                                                                                                                                                                                                                                                      In 1982 I joined Leeds university as a computer operator/assistant. First job out of a CS degree in another uni. The department head was a laser physics specialist and I, and other snobs used to say "what does a laser man know about REAL computer science" and act outraged.

                                                                                                                                                                                                                                                                                      More fool us. More power to him. He was well ahead of the curve, him and his laser physics friends worldwide.

                                                                                                                                                                                                                                                                                  • shimman 2 days ago

I don't like the weasel word "democratize", because there is nothing democratic about being forced to use a tool as a condition of keeping your job. Democratization goes both ways: if you can't destroy something, you cannot truly control it. I'm sure that if you put it to an actual vote, many people would be surprised at the results.

                                                                                                                                                                                                                                                                                    • bsza 2 days ago

                                                                                                                                                                                                                                                                                      It doesn't have to go away, it just needs to be better regulated. I could also increase my productivity by taking Adderall, if that was my end goal. But most people don't, since there are other factors to take into consideration, like becoming unable to function without it, or long-term cognitive decline...

                                                                                                                                                                                                                                                                                      • kace91 2 days ago

The tech isn't going away, but its use is probably going to be recalibrated once we factor in the long-term dangers (effects on learning and acquiring/maintaining skills, maintenance costs of AI-made code, etc.).

                                                                                                                                                                                                                                                                                        We’ll still have the “best code tooling ever invented” stuff, but if the market is assuming “intellectual workers all replaced”, there’s still a bubble pop waiting for us.

                                                                                                                                                                                                                                                                                        • verdverm 2 days ago

                                                                                                                                                                                                                                                                                          You should check out HN /new and /show the last couple of weeks.

It's just like all the ICO, NFT, and other crypto launches, but for all the little things that you can do with AI. Everybody or their bot has some new game-changing AI project. It's a tiring mess right now, which I do hope will similarly die down in time.

                                                                                                                                                                                                                                                                                          For clarity, I was a big fan of blockchain before it got bad, still am for things like ZKP and proof-of-authority, and I am similarly very excited for what Ai enables, but (imo) one cannot easily argue there is not a spam problem that feels similar.

                                                                                                                                                                                                                                                                                          • BoneShard 2 days ago

                                                                                                                                                                                                                                                                                            Check LinkedIn, it's like HN times 100.

                                                                                                                                                                                                                                                                                            • verdverm 2 days ago

I had heard, but then, omg, I was on there for the first time in a while and scrolled the feed for a few flicks just to see.

                                                                                                                                                                                                                                                                                        • benreesman 2 days ago

                                                                                                                                                                                                                                                                                          Monetization destroyed open source. Agent code made the bankruptcy legible.

Open source software was trivially better in the nineties because it was done by people who would have done it for free, and often did. Those people were simply better.

The people bitching about it now didn't push back when it all unified on one forge, or when that forge sold to Microsoft, or when it started trading in like-button stars.

                                                                                                                                                                                                                                                                                          They're bitching now that their grift is up.

                                                                                                                                                                                                                                                                                          • zer00eyz 2 days ago

The house is poorly put together because the carpenter used a cheap nail gun and a crappy saw.

                                                                                                                                                                                                                                                                                            LLMs are confidently wrong and make bad engineers think they are good ones. See: https://en.wikipedia.org/wiki/Dunning–Kruger_effect

If you're a skilled dev in a "common" domain, an LLM can be an amazing tool once you integrate it into your workflow and play "code tennis" with it. It can change the calculus on "one-offs", "minor tools and utils", and "small automations" that in the past you could never justify writing.

I'm not a lawyer or a doctor, so I would never take legal or medical advice from an LLM. I'm happy to work with the tool on code because I know that domain, because I can work with it and take over when it goes off the rails.

                                                                                                                                                                                                                                                                                            • bobpaw 2 days ago

                                                                                                                                                                                                                                                                                              It is hard to test LLM legal/medical advice without risk of harm, but it is often exceedingly easy to test LLM generated code. The most aggravating thing to me is that people just don't. I think the best thing we can do is to encourage everyone who uses/trusts LLMs to test and verify more often.
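
As a concrete illustration of what "test and verify" can mean in practice, here is a minimal sketch in Python. The slugify helper and its edge cases are hypothetical stand-ins for whatever code an LLM actually produced, not something from this thread; the point is only that a few cheap checks cost minutes to write.

```python
import re

def slugify(title: str) -> str:
    # Hypothetical LLM-generated helper: lowercase, replace runs of
    # non-alphanumerics with single dashes, trim leading/trailing dashes.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify():
    # Cheap checks that catch the usual failure modes:
    # punctuation, extra whitespace, empty input, already-clean input.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces   everywhere ") == "spaces-everywhere"
    assert slugify("") == ""
    assert slugify("already-a-slug") == "already-a-slug"

if __name__ == "__main__":
    test_slugify()
    print("all checks passed")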

                                                                                                                                                                                                                                                                                            • deafpolygon 2 days ago

Most LLMs ("AI") are trained on our public code, including the bad code.

                                                                                                                                                                                                                                                                                              It’s not cleverly generating new code; it’s just re-arranging code that it’s already seen. So, naturally, its usefulness is starting to plateau. The bulk of the improvements we’ll see from here on out will be better adaptability to specific applications.

It's not destroying open source; rather, it's making it accessible to everyone. That includes people who don't understand it and people who have never had to go through the hazing that OSS culture tends to put newly inducted members through ("rtfm", "benevolent dictators", and so on).

So without that culture of exclusive membership (by hazing), OSS is now overwhelmed. It's going to take time for the dust to settle from the stampede, and what's left will be the people who care about the craft and the art of software development. I liken it to what photography did to art, and how art has shifted.

One thing that LLMs will be really great for is accelerating learning. It's now possible to tailor the output to individual needs to an even greater degree than before. I'm rather excited to see the possibilities of LLMs in the education space.

                                                                                                                                                                                                                                                                                              • mifydev 2 days ago

Frankly, I don't like these kinds of takes. Yes, people are seeing more spam in their pull requests, but that's just what it is: spam that you need to learn how to filter. For regular engineers who can use AI, it's a blessing.

I'm a long-time Linux user, and now I have more time to debug issues, submit them, and even open pull requests that I would have considered too time-consuming in the past. I want to, and now can, spend more time debugging the Firefox issues I see instead of just dropping them.

I'm still learning to use AI well, and I don't want to submit unverified slop; it's my responsibility to provide a good PR. I'm creating my own projects to get the hang of my setup, and very soon I can start contributing to existing projects. Maintainers, on the other hand, need to figure out how to pick good contributors at scale.

                                                                                                                                                                                                                                                                                                • sarchertech 2 days ago

                                                                                                                                                                                                                                                                                                  Well that’s the problem. AI is really good at making things that bypass people’s heuristics for spam.

Someone can spam me with more AI slop than I can vet, and it can pass any automated filter I can set up.

                                                                                                                                                                                                                                                                                                  The solution is probably closed contributions because figuring out good contributors at scale sounds like figuring out how to hire at scale, which we are horrible at as an industry.

                                                                                                                                                                                                                                                                                                • pvillano 2 days ago

                                                                                                                                                                                                                                                                                                  AI training is information theft. AI slop is information pollution.

                                                                                                                                                                                                                                                                                                  • pvillano 2 days ago

                                                                                                                                                                                                                                                                                                    Search feels like fishing in an ocean of floating plastic.

                                                                                                                                                                                                                                                                                                    Social media feels like parks smothered with smog.

                                                                                                                                                                                                                                                                                                    It makes you stupid like leaded gas.

We'll probably be stuck with it forever, like PFAS.

                                                                                                                                                                                                                                                                                                  • notepad0x90 2 days ago

                                                                                                                                                                                                                                                                                                    This is largely reactionary and false on both counts.

AI has been good for years now. Good doesn't mean perfect. It doesn't mean flawless. It doesn't mean the hype is spot-on. Good means exactly that: it is good at what it is intended to do.

It is not destroying open source either. If anything, there will be more open source contributors using AI to create code.

                                                                                                                                                                                                                                                                                                    You can call anything done by AI "slop" but that doesn't make it so.

Daniel and the curl project were also overreacting. A reaction was warranted, but there were many measures they could have taken before shutting down bug reporting entirely.

If you replace "AI" with "junior dev", "troll", or "spammer", what would things be like then? If the problem is scale, you can troll, spam, and be incompetent at scale just fine without the help of AI.

                                                                                                                                                                                                                                                                                                    It's gatekeeping and sentimentality amplified.

I can't wait for the people who call everything slop to be overshadowed by people so used to LLMs that their usage is no different from using a linter, a compiler, or an IDE: just another tool that is good at certain tasks but not others, abusable, but with reasonable mitigations possible.

I keep reading posts about what open source users are owed and not owed: GitHub restricting PRs, developers complaining about burnout. Have you considered using AI "slop" instead? Give a slop response to what you consider a slop request. Oh, but no, you could never touch "AI", that would stain you! (I'm speaking to the over-reactors.) You don't need AI; you could do anything AI can do (except AI doesn't complain about it all the time, or demand clout).

What is the largest bottleneck and hindrance to open source adoption? Money? No, many people, including myself, are willing to pay for it. I've even lucked out trying to pay an open source project maintainer to support their software. It's always support.

Support means triaging bugs and feature requests in a timely manner. You know what helps with that a lot? A tool that understands code generation and troubleshooting well, along with natural language processing. A bot that can read what people are requesting and give them feedback until their reports meet a certain bar of acceptability, so you as a developer don't have to deal with the tiring back-and-forth. That same tool can generate code in feature branches, fix people's PRs so they meet your standards and priorities, and highlight changes and how they affect your branch, prioritizing them for you, so you can spend minimal time reviewing code and accepting or rejecting PRs.

                                                                                                                                                                                                                                                                                                    If that isn't good for open source then what is?
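
As a rough illustration of the triage bot described above, here is a minimal sketch in Python. The ask_llm helper, the required report sections, and the TriageResult shape are all hypothetical assumptions of mine; a real bot would plug into the forge's issue API and whatever model the maintainer actually trusts.

```python
from dataclasses import dataclass

# Hypothetical acceptance criteria a maintainer might require before a
# report is worth a human's time.
REQUIRED_SECTIONS = ["steps to reproduce", "expected behavior", "actual behavior", "version"]

def ask_llm(prompt: str) -> str:
    # Placeholder: swap in whatever model API the project actually uses.
    return "(model summary would go here)"

@dataclass
class TriageResult:
    ready_for_human: bool
    feedback: str

def triage_issue(issue_body: str) -> TriageResult:
    # Cheap deterministic check first: does the report contain the
    # sections the maintainers require?
    text = issue_body.lower()
    missing = [s for s in REQUIRED_SECTIONS if s not in text]
    if missing:
        return TriageResult(
            ready_for_human=False,
            feedback="Please add the missing sections: " + ", ".join(missing),
        )
    # Only then spend tokens asking a model to summarize and label it,
    # so the maintainer reads a digest instead of the raw back-and-forth.
    summary = ask_llm(
        "Summarize this bug report in three sentences and suggest a priority "
        "(low/medium/high):\n\n" + issue_body
    )
    return TriageResult(ready_for_human=True, feedback=summary)

if __name__ == "__main__":
    print(triage_issue("it crashes, plz fix"))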

A bad attitude toward AI is what is destroying the open source projects led by people entrenched in an all-or-nothing, false-dichotomy mindset against it. And AI itself is good: not great, not replace-humans great, but good enough for its intended use, and great with cooperative humans in the decision-making loop.

                                                                                                                                                                                                                                                                                                    Use the best tool for the task!

                                                                                                                                                                                                                                                                                                    that should be like #2 in the developer rule book, with #1 being:

                                                                                                                                                                                                                                                                                                    It needs to work.

                                                                                                                                                                                                                                                                                                    • bigstrat2003 2 days ago

                                                                                                                                                                                                                                                                                                      > Good doesn't mean perfect. it doesn't mean flawless.

                                                                                                                                                                                                                                                                                                      "Good" means "I can trust it to give me code that is at least as good as what a moderately skilled human would produce". They still aren't there, even after years of development. They still regularly give you code that doesn't follow the correct logic, or which isn't even syntactically valid. They are not good, or even remotely good.

                                                                                                                                                                                                                                                                                                      • notepad0x90 a day ago

That's just your expectation. If it can do as much as the least competent human, that's already a huge deal. You're expecting it to think for you instead of assisting you.

You know what it is capable of, so use it accordingly. It saves lots of time in troubleshooting and in generating starter code. In some cases it can, on its own, generate full-featured, complete production apps that people are using without major issues.

Even with your example, you have to fix syntax and errors here and there instead of writing it all from scratch. Which approach takes more time depends on the model, the code, and you. Like the author, you're using humans as the measuring stick for some reason.

You know it's not really "AI", right? That's just a marketing term; there is no intelligence involved, it's autocompletion. Your argument is like saying IDE autocompletion isn't always great so it should never be used.

                                                                                                                                                                                                                                                                                                    • michelsedgh 2 days ago

I think I have seen more open source projects get released since LLMs came out, and the rate seems to be increasing. The cost of making software and open sourcing it has gone down a lot. We see some slop, but as the models get better the quality will get better, and given the pace I've seen going from GPT-3.5 to now Opus 4.6, I don't think it will be long before LLMs get much better than humans at coding!

                                                                                                                                                                                                                                                                                                      • tayo42 2 days ago

LLMs are already better than most people at coding for typical tasks, imo.

                                                                                                                                                                                                                                                                                                        I finally got around to Claude code and the code it generates and the debugging it does is pretty good.

                                                                                                                                                                                                                                                                                                        Inb4 some random accuses me of being an idiot or shit engineer lol

                                                                                                                                                                                                                                                                                                        • michelsedgh 2 days ago

Couldn't agree more. People forget most software out there has generally shitty code anyway. Also, this is the worst the LLMs will ever be; they will only get better as time goes on…

                                                                                                                                                                                                                                                                                                      • ChicagoDave 2 days ago

I actually think GenAI will create MORE open source code, and as long as devs use quality controls like TDD and SonarQube, the code will evolve into reusable works.

                                                                                                                                                                                                                                                                                                        • kristopolous 2 days ago

                                                                                                                                                                                                                                                                                                          I've never heard of sonarqube ... this looks very enterprisey ... isn't this just prompt engineering over the source with a harness? Why am I clicking through all this signup flow?

I'd buy the "put this in your .git/hooks" workflow ... but I don't know what's going on with this thing.

                                                                                                                                                                                                                                                                                                          The strongest opensource contributors tend to be kinda weird - like they don't have a google account and use some kind of libre phone os that you've never heard of.

                                                                                                                                                                                                                                                                                                          What a "real" solution would look like is some kind of "guardrails" format where they can use an lsp or treesitter to give dos and donts and then have a secondary auditing llm punt the code back.

There may be tools (coderabbit?) that do this ... but that's realistically what the solution will be: local LLMs, self-orchestrated.
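
As a sketch of what such a guardrails pass could look like, the snippet below uses Python's built-in ast module purely as a stand-in for a tree-sitter or LSP layer; the banned-call and function-length rules are made-up examples, and in the workflow described above the findings would go to the auditing model (or back to the contributor) rather than being printed.

```python
import ast

# Hypothetical "do's and don'ts" a maintainer might encode once and run
# against every AI-assisted patch.
BANNED_CALLS = {"eval", "exec"}
MAX_FUNCTION_LINES = 60

def audit(source: str) -> list[str]:
    violations = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Flag calls to banned builtins.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in BANNED_CALLS:
                violations.append(f"line {node.lineno}: call to banned builtin '{node.func.id}'")
        # Flag oversized functions.
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = (node.end_lineno or node.lineno) - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                violations.append(f"line {node.lineno}: function '{node.name}' is {length} lines long")
    return violations

if __name__ == "__main__":
    sample = "def f(x):\n    return eval(x)\n"
    for v in audit(sample):
        print(v)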

                                                                                                                                                                                                                                                                                                          • ChicagoDave 2 days ago

SonarQube does static analysis and lets you set your own levels. Yes, enterprises use it for code and test quality as well as security checks.

I was just saying that good engineers can guide GenAI into creating good code bases. Seeing as I got voted down, not everyone agrees.

                                                                                                                                                                                                                                                                                                            • kristopolous 2 days ago

Eh, at first it sounded like you were hawking your own product. It doesn't look like you are, and this seems to be a widely adopted Fortune 100 product without large brand-name awareness, but that's the risk with HN.

There are a lot of people trying to hustle their stuff on here. It's strongly frowned upon unless it's genuinely free, and even then...

Maybe something like "at work we use something called SonarQube and I've been using it on my own stuff; it works really well" might have been better.

                                                                                                                                                                                                                                                                                                              • ChicagoDave a day ago

I was mostly pointing out that you can still create reusable open source software with GenAI. I couldn't care less what tools you use, but I do think strong engineering principles are the common denominator.

                                                                                                                                                                                                                                                                                                          • jandrewrogers 2 days ago

                                                                                                                                                                                                                                                                                                            SonarQube is pretty useless for quality control unless your process is already broken, in which case you should probably fix your process.

                                                                                                                                                                                                                                                                                                            I once worked at a company where the powers-that-be decided to add SonarQube with max settings to the pipeline for a large C++ code base. It produced no output so IT thought the install was broken. They eventually figured out that it was actually working perfectly but that it never found any issues across the entire code base ever. We got that for free with sensible build configurations long before it got to SonarQube.

                                                                                                                                                                                                                                                                                                            TDD and tools are not a substitute for competent process. I’ve seen plenty of TDD produce objectively poor quality code bases.

                                                                                                                                                                                                                                                                                                          • ramshanker 2 days ago

At least for my personal open source project [1], it has been a >5x boost in speed, motivation, operating knowledge level, etc. In some places I even put an inline comment, "this generated function is not understood completely"! Or maybe a question on specific syntax (C++20).

                                                                                                                                                                                                                                                                                                            [1] https://github.com/ramshankerji/Vishwakarma/

                                                                                                                                                                                                                                                                                                            • OneOffAsk 2 days ago

                                                                                                                                                                                                                                                                                                              > this generated function is not understood completely

I think this kind of stuff is OK for the most part. I think it's a thrilling part of computer science: building systems so complex they're just on the brink of what can be fully understood by a single person. It's what sets software engineering apart from other engineering fields, where it's unacceptable not to fully understand the engineering, say, for factories, buildings, bridges, ships, and other infrastructure.

                                                                                                                                                                                                                                                                                                              • bigstrat2003 2 days ago

                                                                                                                                                                                                                                                                                                                What? It's not ok at all. If you don't understand what the code does, you have no business submitting that code.

                                                                                                                                                                                                                                                                                                            • loeber 2 days ago

                                                                                                                                                                                                                                                                                                              This is a deeply pessimistic take, and I think it's totally incorrect. While I believe that the traditional open source model is going to change, it's probably going to get better than ever.

                                                                                                                                                                                                                                                                                                              AI agents mean that dollars can be directly translated into open-source code contributions, and dollars are much less scarce than capable OSS programmer hours. I think we're going to see the world move toward a model by which open source projects gain large numbers of dollar contributions, that the maintainers then responsibly turn into AI-generated code contributions. I think this model is going to work really, really well.

                                                                                                                                                                                                                                                                                                              For more detail, I have written my thoughts on my blog just the other day: https://essays.johnloeber.com/p/31-open-source-software-in-t...

                                                                                                                                                                                                                                                                                                              • matteotom 2 days ago

Funding for open source projects has been a problem for about as long as open source projects have existed. I'm not sure I follow why you think specifying that donations will go toward LLM tokens will suddenly open the floodgates.

                                                                                                                                                                                                                                                                                                                • loeber 2 days ago

                                                                                                                                                                                                                                                                                                                  If you don't get it, then you should read the blog post and come back if you have questions.

                                                                                                                                                                                                                                                                                                                  • matteotom 2 days ago

                                                                                                                                                                                                                                                                                                                    I did. Your argument seems to be that LLMs allow users who want specific features to direct a donation specifically towards the (token) costs of developing that feature. But I don't see how that's any different from just offering to pay someone to implement the feature you want. In fact, this does happen, eg in the case of companies hiring Linux devs; but it hasn't worked as a general purpose OSS-funding mechanism.

                                                                                                                                                                                                                                                                                                                    • loeber 2 days ago

                                                                                                                                                                                                                                                                                                                      Because offering to pay people to implement features is very expensive and tends to take a long time, if they do it at all. Often, they can't even find people to pay to implement things.

In the case of companies hiring Linux devs, that is very, very costly and thereby inaccessible. Scale makes it different from the scenario of paying a few dollars to contribute tokens to fix a bug.

                                                                                                                                                                                                                                                                                                                      • matteotom a day ago

It seems the assumption you're making without justification is that LLMs will significantly reduce the cost of software development. Even if LLMs can reliably write new features (or even just fix bugs), the maintainer still needs to spend time (which is not free) verifying and reviewing the LLM-produced code.

                                                                                                                                                                                                                                                                                                                    • jscd 2 days ago

                                                                                                                                                                                                                                                                                                                      Wow, impressively insufferable

                                                                                                                                                                                                                                                                                                                  • abrookewood 2 days ago

                                                                                                                                                                                                                                                                                                                    There are a few valid arguments that I see to support the pessimism:

1. When people use LLMs to code, they never read the docs (why would they?), so they miss the fact that the open source library may have a paid version or extension. This means that open source maintainers will receive less revenue and may not be able to sustain their libraries as a result. This is essentially what the Tailwind devs mentioned.

2. Bug bounties have encouraged people to submit crap, which wastes maintainers' time and may lead them to close pull requests. If they do that, they won't get any outside help (or at least, they'll get less). Even if they don't, they now carry a higher review burden than before.

                                                                                                                                                                                                                                                                                                                    • SoftTalker 2 days ago

                                                                                                                                                                                                                                                                                                                      Bug bounties had this risk from day one. Any time you create a reward for something there will be people looking to game it for maximal personal benefit. LLMs and coding agents have just made it that much easier to churn out "vulnerability" reports and amplified it.

                                                                                                                                                                                                                                                                                                                    • avaer 2 days ago

But locally, dollars are a zero-sum game. Your dollars came from someone else. If you make a project better for yourself without making it better for others, you can possibly one-up them and make more dollars with it. If you make it better for everyone, that's not necessarily the case; you're just diluting your money, and soon enough you won't have any and you're eliminated from the race.

While I'd like to believe in the decency and generosity of humans, I don't get the economic case for donating money to the agent behind an OSS project when the person could spend the money on the tokens locally themselves and reap the exclusive reward. If it really is just about money, only the latter makes sense.

                                                                                                                                                                                                                                                                                                                      Obviously this is a gross oversimplification, but I don't think you can ignore the rational economics of this, since in capitalism your dollars are earned through competition.

                                                                                                                                                                                                                                                                                                                      • xyzzy123 2 days ago

It would be cool if you could donate to a maintainer's favourite bot to get bugs fixed.

                                                                                                                                                                                                                                                                                                                        Usually, getting stuff fixed on main is better than being forced to maintain a private fork.

                                                                                                                                                                                                                                                                                                                      • voxl 2 days ago

Open source will ban AI. I'd bet $100 that AI contributions will get banned outright from more and more large OSS projects.

                                                                                                                                                                                                                                                                                                                        • mythrwy 2 days ago

                                                                                                                                                                                                                                                                                                                          How will they know who wrote the code?

                                                                                                                                                                                                                                                                                                                        • lovich 2 days ago

Why would people/companies donate more money to open source in the future than they already donate today?

It's a tragedy-of-the-commons problem. Most of the money available isn't controlled by decision makers who are ideologically aligned with open source, so I don't see why they'd donate any more in the future.

They usually do so because they are critically reliant on a library that's going to die, think it's good PR, want to keep engineers happy (I don't think they care about that anymore), or think they can gain control of some aspect of the industry (looking at you, Futurewei and the corporate workers of the Rust project).

                                                                                                                                                                                                                                                                                                                          • loeber 2 days ago

                                                                                                                                                                                                                                                                                                                            Because donating to open source projects today has an extremely unclear payoff. For example, I donate to KDE, which is my favorite Linux desktop environment. However, this does not have a measurable impact on my day-to-day usage of KDE. It's very abstract in that I'm making a tiny, opaque contribution to its development, but I have no influence on what gets developed.

                                                                                                                                                                                                                                                                                                                            More concretely, there are many features that I'd love to see in KDE which don't currently exist. It would be amazing if I could just donate $10, $20, $50 and submit a ticket for a maintainer to consider implementing the feature. If they agree that it's a feature worth having, then my donation easily covers running AI for an hour to get it done. And then I'd be able to use that feature a few days later.
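As a rough sketch of why that math seems plausible: every price and token count below is an assumption for illustration, not any provider's actual rate, but under those assumptions an hour-long agent session lands somewhere in the $10-20 range, which a mid-sized donation would cover.

    # Back-of-envelope cost of an hour-long coding-agent session.
    # Every number here is an assumption for illustration, not a quoted price.
    input_price_per_mtok = 3.00    # assumed $ per 1M input tokens
    output_price_per_mtok = 15.00  # assumed $ per 1M output tokens
    input_tokens = 4_000_000       # assumed tokens read (code, docs, tool output)
    output_tokens = 300_000        # assumed tokens generated (patches, explanations)

    session_cost = (input_tokens / 1_000_000) * input_price_per_mtok \
                 + (output_tokens / 1_000_000) * output_price_per_mtok
    print(f"estimated session cost: ~${session_cost:.2f}")  # ~$16.50 under these assumptions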

                                                                                                                                                                                                                                                                                                                            • sarchertech 2 days ago

1. You can already do that; it just costs more than $10.

2. Even assuming the AI can crap out the entire feature unassisted, in a large open source code base the maintainer is gonna spend a sizeable fraction of the time reviewing and testing the feature that they would have spent coding it. You're now back to 1.

                                                                                                                                                                                                                                                                                                                              Conceivably it might make it a little cheaper, but not anywhere close to the kind of money you’re talking about.

                                                                                                                                                                                                                                                                                                                              Now if agents do get so good that no human review is required, you wouldn’t bother with the library in the first place.

                                                                                                                                                                                                                                                                                                                              • saimiam 2 days ago

                                                                                                                                                                                                                                                                                                                                > Now if agents do get so good that no human review is required, you wouldn’t bother with the library in the first place.

                                                                                                                                                                                                                                                                                                                                The comment you responded to is (presumably) talking about the transition phase where LLMs can help implement but not fully deliver a feature and need human oversight.

If there are reasonably good devs in low-CoL areas who can coax a new feature or bug fix for an open source project out of an LLM for $50, I think it's worth trialling as a business model.

                                                                                                                                                                                                                                                                                                                                • sarchertech 2 days ago

Did you skip the first part of my comment, where I specifically addressed that?

Even if the human is only doing review and QA, there's no low-cost-of-living area where $50 buys enough time from someone competent enough to do those things. Much less $10.

                                                                                                                                                                                                                                                                                                                                • lovich 2 days ago

Yeah, that's the "not ideologically aligned" part I referenced.

If AI can make features without humans, why would I, as a profit-maximizing organization, donate that resource instead of keeping it in house? If we're not gonna have human eyes on it, then we're not getting more security out of it, I don't really think there's any positive PR in it for us, and keeping it in house denies competitors a resource you now have that they don't.

                                                                                                                                                                                                                                                                                                                            • invalidname 2 days ago

As a maintainer of a medium-size OSS project, I agree. We've been running the product for over a decade, and a few years back Google came out with a competitor that pretty much sucked the air out of our field. It didn't matter that our product was better; we didn't have the resources to compete with a Google hobby project.

As a result, our work on the project got reduced to maintenance until coding agents got better. Over the past year I've rewritten a spectacular amount of the code using AI agents. More importantly, I was able to build enterprise-level testing, which was a herculean task I just couldn't take on by myself.

                                                                                                                                                                                                                                                                                                                              The way I see it, AI brought back my OSS project that was heading to purgatory.

EDIT: Also, about OP's post: it's really f*ing bug bounties that are the problem. These things are horrible and should die in a fire...

                                                                                                                                                                                                                                                                                                                              • kerkeslager 2 days ago

                                                                                                                                                                                                                                                                                                                                > AI agents mean that dollars can be directly translated into open-source code contributions, and dollars are much less scarce than capable OSS programmer hours.

                                                                                                                                                                                                                                                                                                                                I think this is true, but misses the point: quantity of code contributions is absolutely useless without quality. You're correct that OSS programmer hours are the most scarce asset OSS has, but AI absolutely makes this scarce resource even more scarce by wasting OSS programmers' time sifting through clanker slop.

                                                                                                                                                                                                                                                                                                                                There literally isn't an upside. The code produced by AI simply isn't good enough consistently enough.

                                                                                                                                                                                                                                                                                                                                That's setting aside the ethical issues of stealing other people's work and spewing even more carbon into the atmosphere.

                                                                                                                                                                                                                                                                                                                                • Ygg2 2 days ago

                                                                                                                                                                                                                                                                                                                                  Great.

                                                                                                                                                                                                                                                                                                                                  Give money to maintainers? No.

                                                                                                                                                                                                                                                                                                                                  Give money to bury maintainers in AI Slop? Yes.

                                                                                                                                                                                                                                                                                                                                  • Snakes3727 2 days ago

Hi, I just wanted to let you know that your article reads like it was written by AI, because you fail to go into any real explanation for anything.

Frankly, I can summarize your entire essay as:

                                                                                                                                                                                                                                                                                                                                    "We can give maintainers of OSS projects money to maintain projects" revolutionary never been done before. /S