• octoberfranklin 2 days ago

    > An open pull request represents a commitment from maintainers: that the contribution will be reviewed carefully and considered seriously for inclusion.

    This has always been the problem with github culture.

    On the Linux and GCC mailing lists, a posted patch does not represent any kind of commitment whatsoever from the maintainers. That's how it should be.

    The fact that github puts the number of open PRs at the very top of every single page related to a project, in an extremely prominent position, is the sort of manipulative "driving engagement" nonsense you'd expect from social media, not serious engineering tools.

    The fact that you have to pay github money in order to permanently turn off pull requests or issues (I mean turn off, not automatically close with a bot) is another one of these. BTW codeberg lets any project disable these things.

    • littlecranky67 2 days ago

      I have an old open-source project that I archived on GitHub (because I no longer maintain it). Once, a user opened an issue on a completely unrelated project of mine (same user account as the archived one), posting some AI slop with step-by-step click instructions for how to unarchive the project, enable issues, etc. He spammed the same text to two different email addresses he found on my GitHub page and in the git history. I immediately banned that user from opening issues on said project, closed the issue, and ignored him. Just to receive another outraged email asking why I did not comply with his request, and how I would dare ban him from opening further issues. I swear, the entitlement on GitHub is sometimes unbearable.

      • account42 2 days ago

        Also, it turns out the mildly higher barrier to entry of mailing-list-based workflows is actually a feature.

      • oneeyedpigeon 2 days ago

        We've enjoyed a certain period (at least a couple of decades) of global, anonymous collaboration that seems to be ending. Trust in the individual is going to become more important in many areas of life, from open-source to journalism and job interviews.

        • theshrike79 2 days ago

          I've been trying to manifest Web of Trust coming back to help people navigate towards content that's created by humans.

          A system where I can mark other people as trusted and see who they trust, so when I navigate to a web page or, in this case, a GitHub pull request, my WoT would tell me whether this is a trusted person according to my network.
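
          As a rough sketch of what I mean (every name and number here is a made-up placeholder), the lookup could be a bounded walk over the trust edges, with the score decaying per hop:

            # toy web-of-trust lookup; TRUST maps a person to who they trust
            from collections import deque

            TRUST = {
                "me": {"alice", "bob"},
                "alice": {"carol"},
                "carol": {"dave"},
            }

            def trust_score(me, target, decay=0.5, max_hops=3):
                """Walk outward from `me`; each hop halves the score."""
                seen = {me}
                queue = deque([(me, 1.0, 0)])
                while queue:
                    person, score, hops = queue.popleft()
                    if person == target:
                        return score
                    if hops < max_hops:
                        for friend in TRUST.get(person, ()):
                            if friend not in seen:
                                seen.add(friend)
                                queue.append((friend, score * decay, hops + 1))
                return 0.0  # unreachable within max_hops: not trusted

            print(trust_score("me", "dave"))  # 0.125, three hops out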

          • jacquesm 2 days ago

            You need a very complex weighting and revocation mechanism, because once one bad player is in your web of trust they become a node through which other bad players and good players alike can join.

            • thephyber 2 days ago

              Trust in the real world is not immutable. It is constantly re-evaluated. So the Web of Trust concept should do this as well.

              Also, there needs to be some significant consequence to people who are bad actors and, transitively, to people who trust bad actors.

              The hardest part isn’t figuring out how to cut off the low-quality nodes. It’s how to incentivize people to join a network where the consequences are so high that you really won’t want to violate trust. It can’t simply be a free account that only requires a verifiable email address. It will have to require a significant investment in verifying real-world identity, preventing multiple accounts, reducing account hijackings, etc. Those are all expensive and high friction.

              • phoplar a day ago

                I really don't want to expand the surveillance state...

                • theshrike79 20 hours ago

                  Are GPG signing parties part of the “surveillance state”?

                  They are exactly the thing this system needs.

              • theshrike79 2 days ago

                Then I can see who added that bad player and cut off everyone who trusted them (or decrease the trust level if the system allows that).

                • embedding-shape 2 days ago

                  Build a tree, cut the tree at the first bad link, and now you get rid of all of them. There will be some collateral damage, but it's maybe safe to assume actually "good players" can rejoin at another, more stable leaf.
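
                  Sketched roughly in Python (the invite map is a made-up example, and it assumes every member has exactly one inviter):

                    # toy invite tree: member -> the single person who let them in
                    INVITED_BY = {
                        "mallory": "bob",
                        "eve": "mallory",
                        "trent": "mallory",
                        "carol": "alice",
                    }

                    def banned_with(bad):
                        """Cut one link and everyone hanging under it goes too."""
                        out = {bad}
                        grew = True
                        while grew:
                            grew = False
                            for child, parent in INVITED_BY.items():
                                if parent in out and child not in out:
                                    out.add(child)
                                    grew = True
                        return out

                    print(banned_with("mallory"))  # {'mallory', 'eve', 'trent'}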

                  • jacquesm 2 days ago

                    It's a web, not a tree... so this is really not that simple.

                    • embedding-shape 2 days ago

                      Yeah, that's the problem, and my suggestion is to change it from a web to a tree instead, to solve that issue.

                      • jacquesm 2 days ago

                        That does not work because you won't have multiple parties vouching for a new entrant. That's the whole reason a web was chosen instead of a tree in the first place. Trees are super fragile in comparison; bad actors would have a much bigger chance of going undetected in a tree-like arrangement.

                        • theshrike79 2 days ago

                          What is a web if not multiple trees that have interconnected branches? :)

                          • embedding-shape 2 days ago

                            In the end, it's all lists anyways :)

                            • foobar10000 18 hours ago

                              Well - lists of tuples. Otherwise known as a graph :)

                    • 0ckpuppet 2 days ago

                      aka clown explosion

                    • 1718627440 15 hours ago

                      The problem is that even the people I would happily take advice from when meeting in real life occasionally mindlessly copy AI output about subjects they don't know. And they see nothing wrong with it.

                      • solaire_oa 20 hours ago

                        I've been thinking this exact thing! But it's too abstract a thought for me to try creating anything yet.

                        A curation network, one which uses SSL-style chain-of-trust (and RSS-style feeds maybe?) seems like it could be a solution, but I'm not able to advance the thought from just being an amorphous idea.

                        • IsTom 2 days ago

                          Unfortunately trust isn't transitive.

                          • dizhn a day ago

                            Trust and do what with it, though? I trust Chomsky, but I can mark his interviews "Don't show" because I'm sick of them. Or how Facebook lets you follow a 'friend' but ignore them. So trust, and do what with that trust? A network of people who'll let each other move on short notice? Something like that?

                            • thephyber 2 days ago

                              I would go even further. I only want to see content created by people who are in a chain of trust with me.

                              AI slop is so cheap that it has created a blight on content platforms. People will seek out authentic content in many spaces. People will even pay to avoid the mass “deception for profit” industry (e.g. industries where companies bot ratings/reviews for profit, and where social media accounts are created purely for rage bait / engagement farming).

                              But reputation in a WoT network has to be paramount. The invite system needs a “vouch” so there are consequences to you and your upstream vouch if there is a breach of trust (eg. lying, paid promotions, spamming). Consequences need to be far more severe than the marginal profit to be made from these breaches.
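
                              A hypothetical sketch of what consequences up the vouch chain could look like (the names, the penalty, and the falloff are all invented; this is not a real protocol):

                                # each member records who vouched for them; a breach costs
                                # the offender and, at a discount, everyone up the chain
                                VOUCHED_BY = {"carol": "bob", "bob": "alice"}
                                REPUTATION = {"alice": 1.0, "bob": 1.0, "carol": 1.0}

                                def punish(offender, penalty=0.6, falloff=0.5):
                                    member = offender
                                    while member is not None:
                                        REPUTATION[member] = max(0.0, REPUTATION[member] - penalty)
                                        penalty *= falloff  # vouchers share a smaller slice of blame
                                        member = VOUCHED_BY.get(member)

                                punish("carol")
                                print(REPUTATION)  # {'alice': 0.85, 'bob': 0.7, 'carol': 0.4}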

                              • jauntywundrkind 2 days ago

                                AT Protocol (Bluesky) will, I hope, have better trust signals, since your Personal Data Server stores your microblog/posts and a bunch of other data. And the data is public. It's much harder to convincingly fake being a cross-media human.

                                If someone showed up on at-proto powered book review site like https://bookhive.buzz and started trying to post nonsense reviews, or started running bots, it would be much more transparent what was afoot.

                                More explicit trust signalling would be very fun to add.

                              • embedding-shape 2 days ago

                                > global, anonymous collaboration that seems to be ending. Trust in the individual is going to become more important in many areas of life

                                  I don't think it's coming to an end. It's getting more difficult, yes, but not impossible. Currently I'm working on a game, and since I'm not an artist, I pay artists to create the art. Of the person I'm working closest with, I have basically no idea who they are, except their name, email, and the country they live in. Otherwise it's basically "they send me a draft > I review/provide feedback > iterate until done > I send them money", and both of us know basically nothing about the other.

                                  I agree that trust in the individual is becoming more important, but it's always been one of the most important things for collaborations or anything that involves other human beings. We've tried to move that trust to other systems, but it seems instead we're only able to move the trust to the people building and maintaining those systems, rather than getting rid of it completely.

                                  Maybe "trust" is just here to stay, and we'd all be better off as soon as we start to realize this, reconnect with the people around us, and connect with the people on the other side of the world.

                                • willis936 2 days ago

                                  How do you know it's a person on the other end? Would you even see a difference if you had a computer generate that art?

                                  These are very important questions that cut to the heart of "what is art".

                                  • embedding-shape 2 days ago

                                    > How do you know it's a person on the other end? Would you even see a difference if you had a computer generate that art?

                                      Unless AI companies have already developed and launched plugins/extensions that let people produce something that looks like hand-drawn sketches inside of Clip Studio, and have suddenly gotten a lot better at understanding prompts (including having inspiration of their own), I'm pretty sure it's a human.

                                      I don't think I'd get to see in-progress sketches, and it wouldn't be as good at understanding what I wanted changed, either. I've used various generative AI image generators (most recently Qwen Image 2511, and a whole bunch of others) and none of them, even with "prompt enhancements", can take very vague descriptions like "I want it to feel like X" or "I'm not sure about Y, but something like Z" and turn them into something that looks acceptable. At least not yet.

                                    And because I've spent a lot of time with various generative image making processes and models, I'm fairly confident I'd recognize if that was what was happening.

                                    • willis936 2 days ago

                                      Sure, it's true today. Entertain the hypothetical though because this is what the trillion dollar rush is aspiring to do in the near future. We should be thinking about our answers now.

                                      • embedding-shape 2 days ago

                                          Answers to what? Do I care what tools the artist uses as long as I get the results I want? I don't understand what you see as the issue; that I somehow think I'd be working with a human but it was a machine?

                                  • thephyber 2 days ago

                                    I think it absolutely is coming to an end in lots of ways.

                                      Movie/show reviews, product reviews, app/browser extension reviews, programming libraries, etc. all get gamed. An entire industry of boosting reviews has sprung up, with PR companies brigading positive reviews for their clients.

                                    The better AI gets at slop and controlling bots to create slop which is indistinguishable from human content, the less people will trust content on those platforms.

                                    Your trust relationship with your artist almost certainly was based on something other than just contact info. Usually you review a portfolio, a professional profile, and you start with a small project to limit your downside risk. This tentative relationship and phased stages where trust is increased is how human trust relationships have always worked.

                                    • embedding-shape 2 days ago

                                        > Movie/show reviews, product reviews, app/browser extension reviews, programming libraries, etc. all get gamed. An entire industry of boosting reviews has sprung up, with PR companies brigading positive reviews for their clients.

                                        But that long predates AI. When Amazon first became available here in Spain (I don't remember exactly what year, but before LLMs for sure), the amount of fraudulent reviews filling the platform was already noticeable.

                                        That industry you're talking about might have gotten new wings with LLMs, but it wasn't spawned by LLMs; it existed a long time before that.

                                      > the less people will trust content on those platforms.

                                        Maybe I'm jaded from using the internet from a young age, but both my peers and I basically have a built-in mistrust of random stuff we see on the internet, at least compared to our parents and our younger peers.

                                      "Don't believe everything you see on the internet" been a mantra almost for as long as the internet has existed, maybe people forgot and needed an reminder, but it was never not true.

                                      • thephyber 2 days ago

                                        LLMs reduce the marginal cost per unit of content.

                                        When snail mail had a cost floor of $0.25 for the price of postage, email was basically free. You might get 2-3 daily pieces of junk mail in your house’s mailbox, but you would get hundreds or thousands in your email inbox. Slop comes at scale. LLMs didn’t invent spam, but they are making it easier to create more variants of it, and possibly ones that convert better than procedurally generated pieces.
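
                                          As a toy back-of-the-envelope (every number here is invented): the break-even response rate is just marginal cost divided by profit per response, which is why the cost floor matters so much.

                                            # one response has to cover the cost of the whole batch
                                            profit_per_response = 20.00  # hypothetical take per mark

                                            for channel, cost in [("postal mail", 0.25), ("email", 0.0001)]:
                                                messages_per_hit = profit_per_response / cost
                                                print(f"{channel}: profitable at 1 hit per {messages_per_hit:,.0f} messages")
                                            # postal mail: profitable at 1 hit per 80 messages
                                            # email: profitable at 1 hit per 200,000 messages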

                                        There’s a difference between your cognitive brain and your lizard brain. You can tell yourself that mantra, but still occasionally fall prey to spam content. The people who make spam have a financial incentive to abuse the heuristics/signals you use to determine the authenticity of a piece of content in the same way cheap knockoffs of Rolex watches, Cartier jewelry, or Chanel handbags have to make the knockoffs appear as authentic as possible.

                                        • pixl97 2 days ago

                                          >When snail mail had a cost floor of $0.25 for the price of postage

                                          Hence I suspect that quite a few of these interfaces that are now being spammed with AI crap will end up implementing something that represents a fee, a paywall, or a trustwall. That should keep armies of AI slop responses from being worthwhile.

                                          How we do that without killing some communities is yet to be seen.

                                    • contrast 2 days ago

                                        Your tone is disagreement, but it's not clear why.

                                      There is an individual who you trust to do good work, and who works well with you. They're not anonymous. Addressing the topic of this thread, you know (or should know) that it is not AI slop.

                                      That is a significant amount of knowledge and trust in an individual, and the very point I thought the GP was making.

                                    • globular-toast 2 days ago

                                      Some projects, like Linux (the kernel) have always been developed that way. Linus has described the trust model in the kernel to be very much "web of trust". You don't just submit patches directly to Linus, you submit them to module maintainers who are trusted by subsystem maintainers and who are all ultimately, indirectly trusted by the branch maintainer (Linus).

                                      • jruohonen a day ago

                                        > Trust in the individual is going to become more important in many areas of life, from open-source to journalism and job interviews.

                                        I'd add science here too.

                                        • undefined 2 days ago
                                          [deleted]
                                          • agumonkey 2 days ago

                                            trust in trust.. as a programmer would say

                                            the web brought instant infinite 'data'. we used to have limits, limits that would kinda ensure the reality of what is communicated.. we should go back to that; it's efficient

                                          • sbondaryev 3 days ago

                                             Seems like reading the code is now the real work. AI writes PRs instantly but reviewing them still takes time. Everything flipped. Expect more projects to follow; maintainers can just use AI themselves without needing external contributions.

                                            • bigstrat2003 2 days ago

                                              Understanding (not necessarily reading) always was the real work. AI makes people less productive because it's speeding up the thing that wasn't hard (generating code), while generating additional burden on the thing that was hard (understanding the code).

                                              • corndoge 2 days ago

                                                There are many cases in which I already understand the code before it is written. In these cases AI writing the code is pure gain. I do not need to spend 30 minutes learning how to hold the Bazel rule. I do not need to spend 30 minutes writing client boilerplate. The list goes on. All broad claims about AI's effects on productivity have counterexamples. It is situational. I think most competent engineers quietly using AI understand this.

                                                • em-bee 2 days ago

                                                  > In these cases AI writing the code is pure gain.

                                                  no, it isn't. unless the generated code is just a few lines long, and all you are doing is effectively autocompletion, you have to go through the generated code with a fine toothed comb to be sure it actually does what you think it should do and there are no typos. if you don't, you are fooling yourself.

                                                  • pixl97 2 days ago

                                                    > with a fine toothed comb to be sure it actually does what you think it should do and there are no typos. if you don't, you are fooling yourself

                                                    so the exact same thing you should be doing in code reviews anyway?

                                                    • em-bee a day ago

                                                      kind of, except that when i review a code submission to my project i can eventually learn to trust the submitter, once i realize they write good code. a code review is how that trust develops. AI code should never earn that trust, and any code review should always be treated as if it is from a first-time submitter i have never met before. the risk is that this does not happen, and that we believe AI code submissions will develop like those of a real human. they won't. we'll develop a false sense of security, a false sense of trust. instead we should always be on guard.

                                                      and as i wrote in my other comment, reviewing the code of a junior developer includes the satisfaction of helping that developer grow through my feedback. AI will never grow. there is no satisfaction in reviewing its code. instead it feels like a sisyphean task, because the AI will make the same mistakes over and over again, and make mistakes a human would be very unlikely to make. unlike human code, with AI code you have to expect the unexpected.

                                                    • corndoge 2 days ago

                                                      Broadly I agree with you. I think of it in terms of responsibility. Ultimately the commit has my name on it, so I am the responsible party. From that perspective, I do need to "understand" what I am checking in to be reasonably sure it meets my professional standards of quality.

                                                      The reason I put scare quotes on "understand" is that we need to acknowledge that there are degrees of understanding, and that different degrees are required in different scenarios. For example, when you call syscall(), how well do you understand what is happening? You understand what's in the manpage; you know that it triggers a switch to kernel space, performs some task, returns some result. Most of us have not read the assembly code, we have a general concept of what is going on but the real understanding pretty much ends at the function call. Yet we check that in because that level of understanding corresponds to the general engineering standard.

                                                      In some cases, with AI, you can be reasonably sure the result is correct without deeply understanding it and still meet the bar. The bazel rule example is a good one. I prompt, "take this openapi spec and add build rules to generate bindings from it. Follow existing repo conventions." From my years of engineering experience, I already know what the result should look like, roughly. I skim the generated diff to ensure it matches that expectation; skim the model output to see what it referenced as examples. At that point, what the model produced is probably similar to what I would have produced by spending 30 minutes grepping around, reading build rules, et cetera. For this particular task, the model has saved me that time. I don't need to understand it perfectly. Either the code builds or it doesn't.

                                                      For other things, my standard is much higher. For example, models don't save me much time on concurrent code because, in order to meet the quality bar, the level of understanding required is much higher. I do need to sit there, read it, re-read it, chew on the concurrency model, et cetera. Like I said, it's situational.

                                                      There are many, many other aspects to quantifying the effects of AI on productivity, code quality is just one aspect. It's very holistic and dependent on you, how you work, what domain you work in, the technologies you work with, the team you work on, so many factors.

                                                    • Analemma_ 2 days ago

                                                      The problem is, even if all that is true, it says very little about the distribution of AI-generated pull requests to GitHub projects. So far, from what I’ve seen, those are overwhelmingly not done by competent engineers, but by randos who just submit a massive pile of crap and expect you to hurry up and merge it already. It might be rational to auto-close all PRs on GitHub even if tons of engineers are quietly using AI to deliver value.

                                                      • solid_fuel 2 days ago

                                                        > There are many cases in which I already understand the code before it is written. In these cases AI writing the code is pure gain.

                                                        That's only true if the LLM understands the code in the same way you do - that is, it shares your expectations about architecture and structure. In my experience, once the architecture or design of an application diverges from the average path extracted from training data, performance seriously degrades.

                                                        You wind up with the LLM creating duplicate functions to do things that are already handled in code, or using different libraries than your code already does.

                                                        • anileated 2 days ago

                                                          > There are many cases in which I already understand the code before it is written.

                                                          Typing speed is your bottleneck?

                                                          • falloutx 2 days ago

                                                            Unless you have made some exceptional advances in LLM agents (if you have, send me the Claude skill?), you can't predict it.

                                                            If it were predictable like a transpiler, you wouldn't have to read it. You can think of it as pure gain, but then you are just not reading the code it's outputting.

                                                            • csomar 2 days ago

                                                              I mean we did copy/paste before this? Also create-react-app is basically that. And probably better than a stochastic AI generating it.

                                                            • agumonkey 2 days ago

                                                              Very much disagree. When I type code I don't just press keys; I read, think, organize.. and the interplay between acting, seeing, watching, reevaluating was the fun part. There's a part of you that disappears if you only review the output of a generator. That's why it's less interesting imo.

                                                              • freehorse 2 days ago

                                                                As not all codebases are well written, I have found it useful once to get an LLM to produce code that does X, essentially distilling from a codebase that does XYZ. I found that reviewing the code the LLM produced, after feeding the original codebase into the context, was easier than going through the (not very well written) codebase myself. Of course this was just the starting point; there were a ton of things the LLM "misunderstood", and then a ton of manual work, but it is an (admittedly rarer) example for me where "AI-generated" code is easier to read than code written by (those) humans, and it was actually useful at that point.

                                                                • bwfan123 2 days ago

                                                                  > Understanding (not necessarily reading) always was the real work.

                                                                  Great comment. Understanding is mis-"understood" by almost everyone. :)

                                                                  Understanding a thing equates to building a causal model of the thing. And I still do not see AI as having a causal model of my code even though I use it every day. Seen differently, code is a proof of some statement, and verifying the correctness of a proof is what a code-review is.

                                                                  There is an analogue to Brandolini's bullshit asymmetry principle here. Understanding code is 10 times harder than reading code.

                                                                  • Ntrails 2 days ago

                                                                    Question:

                                                                    Which is harder: writing 200 lines of code, or reading 200 lines of code someone else wrote?

                                                                    I pretty firmly find the latter harder, which means for me AI is most useful for finessing a roughly correct PR rather than writing the actual logic from scratch.

                                                                    • jchanimal 2 days ago

                                                                      It makes a great code-reading tool if you use it mindfully. For instance, you can check the integrity of your tests by having it fuzz the implementation and ensure the tests fail, then git checkout to get clean again.
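
                                                                      A bare-bones version of that loop (the target file and the trivial '==' to '!=' mutation are stand-ins for whatever the agent would actually fuzz):

                                                                        import pathlib
                                                                        import subprocess

                                                                        def suite_passes():
                                                                            return subprocess.run(["pytest", "-q"]).returncode == 0

                                                                        target = pathlib.Path("src/impl.py")  # hypothetical module under test
                                                                        original = target.read_text()
                                                                        assert suite_passes(), "start from a green suite"

                                                                        target.write_text(original.replace("==", "!="))  # stand-in mutation
                                                                        try:
                                                                            if suite_passes():
                                                                                print("tests never noticed the mutation; coverage gap")
                                                                        finally:
                                                                            subprocess.run(["git", "checkout", "--", str(target)])  # clean again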

                                                                      • mannanj 2 days ago

                                                                        AI makes people less productive because it’s speeding up the thing that was hard: training AI for better future AI.

                                                                        The productivity gets siphoned to the AI companies owning the AI.

                                                                        • ironbound 2 days ago

                                                                          You'd be unsurprised by how many AI poison-pill projects are on GitHub.

                                                                          • trhway 2 days ago

                                                                            That is how the main point of Das Kapital looks in the modern AI world.

                                                                          • KronisLV 2 days ago

                                                                            > AI makes people less productive because it's speeding up the thing that wasn't hard (generating code), while generating additional burden on the thing that was hard (understanding the code).

                                                                            Only if the person doesn't want the AI to help with understanding how it works, in which case it doesn't matter whether they use AI or not (except that without it they couldn't push some slop out the door at all).

                                                                            If you want that understanding, I find that AI is actually excellent with it, when given proper codebase search tools and an appropriately smart model (Claude Code, Codex, Gemini), easily browsing features that might have dozens of files making them up - which I would absolutely miss some details of in the case of enterprisey Java projects.

                                                                            I think the next tooling revolution will probably be automatically feeding the model all of the information about how the current file fits within the codebase - not just syntax errors and linter messages, but also dependencies, usages, all that.

                                                                            In my eyes, the "ideal" code would be simple and intuitive enough to understand so that you don't actually need to spend hours to understand how a feature works OR use any sort of AI tool, or codebase visualization as a graph (dependency and usage tracking) or anything like that - it just seems that you can't represent a lot of problems like that easily, given time constraints and how badly Spring Boot et al fucks up any codebase it touches with accidental complexity.

                                                                            But until then, AI actually helps, a lot. Maybe I just don't have enough working memory (or time) to go through 30 files and sit down and graph it out in a notebook like I used to, but in lieu of that an AI generated summary (alongside docs/code tests/whatever I can get, but seems like humans hate writing docs and ADRs, at least in the culture here) is good enough.

                                                                            At the same time, AI will also happily do incomplete refactoring or not follow the standards of the rest of the codebase and invent abstractions where it doesn't need any, if you don't have the tooling to prevent it automatically, e.g. prebuild checks (or the ability to catch it yourself in code review). I think the issue largely is limited context sizes (without going broke) - if I could give the AI the FULL 400k SLoC codebase and the models wouldn't actually start breaking down at those context lengths, it'd be pretty great.

                                                                            • account42 2 days ago

                                                                              Yeah, I have always seen PRs from new contributors as having (on average) negative value, but as an investment in a hopefully-positive future contributor. I don't have that optimism for contributors who start out with AI slop.

                                                                            • steveruizok 2 days ago

                                                                                Reviewing code is much less of a burden if I can trust the author to also be invested in the output and to have all the context they need to make it correct. That's true for my team / tldraw's core contributors but not for external contributors or drive-by accounts. This is nothing new, and up to now it has been worth the hassle for the benefits of contribution: new perspectives, other motivations, relationships with new programmers. What's new is the scale of the problem and the risk that the repo gets overwhelmed by "claude fix this issue that I haven't even read" PRs.

                                                                              • Analemma_ 2 days ago

                                                                                This is probably true, and while I expect productivity to go up, I also expect "FOSS maintainer burnout" to skyrocket in the coming years.

                                                                                Everyone knows reading code is one-hundredth as fun as writing it, and while we have to accept some amount of reading as the "eating your vegetables" part of the job, FOSS project maintainers are often in a precarious enough position as it is re: job satisfaction. I think having to dramatically increase the proportion of reading to writing, while knowing full well that a bunch of what they are reading was created by some bozo with a CC subscription and little understanding of what they were doing, will lead to a bunch of them walking away.

                                                                                • em-bee 2 days ago

                                                                                    i have fun reading code, but the fun comes from knowing a human did this. if i find errors i get the satisfaction of helping that human become a better developer by making them realize the error and avoid it in the future. if the code is the contribution of a volunteer to a project of mine, even more so. that all goes out the window with AI-generated code.

                                                                                  • binary132 2 days ago

                                                                                    Not to worry! Microslop probably has a product in the works to replace disgruntled open-source maintainers with agreeable, high-review-throughput agentic systems.

                                                                                  • patcon 2 days ago

                                                                                    In the civic tech hacknight community I'm part of, it's hard to collaborate the same now, at least when people are using AI. Mostly because now code often feels so disposable and fast. It's like the pace layers have changed

                                                                                    It's been proposed that we start collaborating in specs, and just keep regenerating the code like it's CI, to get back to the feeling of collaboration without holding back on the energy and speed of agent coding

                                                                                    • internetter 2 days ago

                                                                                      > Mostly because now code often feels so disposable and fast

                                                                                      I really like this thought. We used to take pride in elegant solutions and architectural designs. Now, in the era of shipping fast and AI, this has been disregarded. Redundancy is everywhere, spaghetti is normalized. AI code has always been unsettling for me and I think this is why.

                                                                                      • pjmlp 2 days ago

                                                                                        Think 1 <pick currency> shops, now that factories have fully taken over.

                                                                                        I see a future where those that survive are doing mostly architecture work, and a few druids are hired by AI companies.

                                                                                      • octoberfranklin 2 days ago

                                                                                        Clowns will just use LLMs to post slop comments in the spec discussions.

                                                                                      • em-bee 2 days ago

                                                                                        this is precisely why i refuse to use AI to generate code at all. i'd have to not only read it but internalize it and understand it in a way as if i had written it myself. at that point it is easier to actually write the code myself.

                                                                                        for prototypes and throwaway stuff where only the results count, it may be ok. but not for code that goes into a larger project. especially not FOSS projects where the review depends on volunteers.

                                                                                        • foretop_yardarm 2 days ago

                                                                                            I actually think Ada has good potential as an AI-adjacent language, because the syntax is optimised for readability. (I personally find it very readable too.)

                                                                                          • timeon 2 days ago

                                                                                              I think the problem is not quality but quantity in a reasonable time frame.

                                                                                          • reacharavindh 2 days ago

                                                                                                I've been using a coding agent over several days on a personal project. It has made me think:

                                                                                                1. These LLMs are smart and dumb at the same time. They make a phenomenal contribution in a very short time, then make a really dumb change that no one asked for. They break working code in irrational ways. I've been asking them to add lots of tests for all the functions I care about; this acts as a first guard rail when they trip over themselves. Excessive tests.

                                                                                                2. Having a compiler like Rust's helps to catch all sorts of mines that the LLMs are happy to leave.

                                                                                                3. The LLMs don't have a proper working memory. Their context is often cluttered. I find that curating that context (what is being done, what was tried, what is the technical goal, specific requests, etc.) in a concise yet “relevant for the time” manner helps to keep them from messing up.

                                                                                                Perhaps important open-source projects that choose to accept AI-generated PRs can keep such excessive test suites and run the PRs through them first, as a filter for idiocy, before manually reviewing what the change does.

                                                                                            • quectophoton 2 days ago

                                                                                                  Questions. I want to get into coding agents, so, out of curiosity: which one(s) did you use, and how much money has it cost you? (Any metric is fine.)

                                                                                            • ryanxcharles 2 days ago

                                                                                              you can use ai to review PRs. i do this daily.

                                                                                              • exactlie 2 days ago

                                                                                                [flagged]

                                                                                              • kanzure 2 days ago

                                                                                                That's interesting; another project stopped letting users directly open issues: https://news.ycombinator.com/item?id=46460319

                                                                                                • pella 2 days ago

                                                                                                  Check Ghostty "CONTRIBUTING.md#ai-assistance-notice"

                                                                                                    "The Ghostty project allows AI-assisted code contributions, which must be properly disclosed in the pull request."
                                                                                                  
                                                                                                  https://github.com/ghostty-org/ghostty/blob/main/CONTRIBUTIN...

                                                                                                  Mitchell Hashimoto (2025-12-30): "Slop drives me crazy and it feels like 95+% of bug reports, but man, AI code analysis is getting really good. There are users out there reporting bugs that don't know ANYTHING about our stack, but are great AI drivers and producing some high quality issue reports.

                                                                                                  This person (linked below) was experiencing Ghostty crashes and took it upon themselves to use AI to write a python script that can decode our crash files, match them up with our dsym files, and analyze the codebase for attempting to find the root cause, and extracted that into an Agent Skill.

                                                                                                  They then came into Discord, warned us they don't know Zig at all, don't know macOS dev at all, don't know terminals at all, and that they used AI, but that they thought critically about the issues and believed they were real and asked if we'd accept them. I took a look at one, was impressed, and said send them all.

                                                                                                  This fixed 4 real crashing cases that I was able to manually verify and write a fix for from someone who -- on paper -- had no fucking clue what they were talking about. And yet, they drove an AI with expert skill.

                                                                                                  I want to call out that in addition to driving AI with expert skill, they navigated the terrain with expert skill as well. They didn't just toss slop up on our repo. They came to Discord as a human, reached out as a human, and talked to other humans about what they've done. They were careful and thoughtful about the process.

                                                                                                  People like this give me hope for what is possible. But it really, really depends on high quality people like this. Most today -- to continue the analogy -- are unfortunately driving like a teenager who has only driven toy go-karts. Examples: https://github.com/ghostty-org/ghostty/discussions?discussio... " ( https://x.com/mitchellh/status/2006114026191769924 )

                                                                                                  • floortepw 2 days ago

                                                                                                      You conveniently left off the follow-up.

                                                                                                    > @zeroxBigBoss: .. It's not all AI, I have experience with Zig and MacOS, ..

                                                                                                    > @mitchellh: I appreciate it! And my bad on the experience, I must have misunderstood or misremembered your messages

                                                                                                      Use xcancel, at the very least to see an entire thread.

                                                                                                    • suddenlybananas 2 days ago

                                                                                                      Every time.

                                                                                                      • overfeed 2 days ago

                                                                                                        Another victory notch for the "AI Influentist" article[1].

                                                                                                        Step 1: thought leader reveals Shocking(tm) AI achievement

                                                                                                        Step 2: post gets traction

                                                                                                        Step 3: additional context is revealed, dragging the original claim from the realm of the miraculous to "merely" useful.

                                                                                                            I don't think Mitchell intentionally misrepresented or exaggerated, but the phenomenon is recurring. What's the logical explanation for the frequency?

                                                                                                        1. https://news.ycombinator.com/item?id=46623195

                                                                                                        • dormento 2 days ago

                                                                                                          Getting tiresome, isn't it?

                                                                                                      • freehorse 2 days ago

                                                                                                          Apart from the external person turning out to have experience with Zig and macOS (though not with developing terminals or rendering), this is a good example, imo, of what AI can be used well for: writing one-off code/tools for which it is enough that they just work (even if not perfectly), and which one does not really care about maintaining, because they are meant to be used only in a specific occasion/context. In this case, the external person was smart enough to use AI to identify the problems and not to produce "fixes" to send as a PR.

                                                                                                          Imo, an issue is that the majority of people who submit AI slop as PRs have different motivations than this person ("developing a PR portfolio", whatever that may mean), or are much less competent and less eager to do actual work themselves (which AI use can worsen).

                                                                                                    • blibble 2 days ago

                                                                                                      > With luck, GitHub will soon roll out management features that let us open things back up.

                                                                                                      I wouldn't bet on it

                                                                                                      SlopHub

                                                                                                      • smetj 2 days ago

                                                                                                              Generally speaking, the value of these contributions was determined by "proof of work". Time and effort are precious to a human, hence it's a somewhat self-regulating system preventing huge amounts of low-quality contributions from being generated. This is now gone. Isn't that an interesting problem to fix?

                                                                                                        • ironbound 2 days ago
                                                                                                          • kristopolous 2 days ago

                                                                                                            Book publishers have stopped accepting unsolicited submissions for the same reason.

                                                                                                            You need a literary agent for just about all of them

                                                                                                            • andybak 2 days ago

                                                                                                              > and little to no follow-up engagement from their authors.

                                                                                                              A strategy I sometimes use for external contributions is to immediately ask a question about the pull request. Ignoring PRs where I don't get a reply or the reply doesn't make sense potentially eliminates a lot of low quality contributions.

                                                                                                                    I wonder if a "no AI" rule is an overly blunt instrument. I can sympathise with it, but babies and bathwater, etc.

                                                                                                              • MohskiBroskiAI 6 hours ago

                                                                                                                This is the inevitable result of probabilistic coding.

                                                                                                                      The current wave of "AI coding agents" is just wrappers around vector DBs that fetch fuzzy context. They don't "understand" the codebase; they statistically guess the next token based on a cosine-similarity match.

                                                                                                                Of course they generate subtle bugs. They have no concept of topological consistency.

                                                                                                                I realized this 3 months ago and stopped using standard agents. I built a local memory protocol (Remember-Me) that uses Wasserstein Distance to enforce strict consistency before the AI is allowed to write a line of code. If the memory doesn't mathematically fit the context topology, it rejects the edit.

                                                                                                                We need to move from "Generative" coding to "Verifiable" coding, or this slop will drown every OSS maintainer.

                                                                                                                • junon 2 days ago

                                                                                                                  They invited AI in by creating a comprehensive list of instructions for AI agents - in the README, in a context.md, and even as yarn scripts. What did they expect?

                                                                                                                  • steveruizok 2 days ago

                                                                                                                          Hey, Steve from tldraw here. We use AI tools to develop tldraw. The tools are not the problem; they're just changing the fundamentals (e.g. a well-formed PR is no longer a sign of thoughtful engagement, a large PR no longer shows more effort than a small one, etc.) and accelerating other latent issues in contribution.

                                                                                                                          About the README etc.: we ship an SDK and a lot of people use our source code as docs or a prototyping environment. I think a lot about agents as consumers of the codebase and I want to help them navigate the monorepo quickly. That said, I'm not sure the CONTEXT.md system I made for tldraw is actually that useful... new models are good at finding their way around, and I also worry that we don't update the files enough. I've found that bad directions are worse than no directions over time.

                                                                                                                    • stavros 2 days ago

                                                                                                                      This is my experience as well. I work with AI agents a lot, they are very useful. What's not useful is some passer-by telling the AI "implement <my favorite feature>" and then sending that as a PR. I could have written a sentence to the LLM too if I wanted to, you aren't really giving me or the project any value by doing that.

                                                                                                                      Now that writing the code is the easy part, we're just going to transition to having very few contributors, who are needed for their architectural skills, product vision, reasoned thinking, etc, rather than pure code-writing.

                                                                                                                      • account42 2 days ago

                                                                                                                        Outside contributors have never been valuable as pure code monkeys.

                                                                                                                        • nwallin 2 days ago

                                                                                                                          Every inside contributor (besides the original author) started as an outside contributor. If the solution to the problem of LLMs is a blanket ban on outside contributors, I fear for the future of open source.

                                                                                                                          • stavros 2 days ago

                                                                                                                            I disagree. When code took hours to write, it was very useful to have someone drive by and fix a bug for you. Now, all that does is save you five minutes of LLM crunching.

                                                                                                                      • ggbaker 2 days ago

                                                                                                                        The CONTEXT.md file was created 5 months ago, and the contribution policy changed today. I would interpret that as a good-faith attempt to work with AI agents which, with some experience, didn't work as well as they hoped.

                                                                                                                        • embedding-shape 2 days ago

                                                                                                                          Wouldn't that be for their own usage? Its presence doesn't implicitly mean they want incomplete PRs submitted to their repository constantly.

                                                                                                                          • Seattle3503 2 days ago

                                                                                                                            I still find it useful to vibe code in a private fork. For example, with yt-dlp it's now super easy to add support for a website with Claude for personal use, but that doesn't mean it's appropriate to open a PR.
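
                                                                                                                            To make that concrete, here is roughly what such a private-fork extractor looks like. The site, URL pattern, and class name are invented; _VALID_URL, _match_id, _download_webpage, and the _og_search_* helpers are yt-dlp's real InfoExtractor hooks:

                                                                                                                                # Hypothetical extractor for a made-up site, the kind of thing
                                                                                                                                # Claude can draft in minutes against yt-dlp's InfoExtractor API.
                                                                                                                                from yt_dlp.extractor.common import InfoExtractor


                                                                                                                                class ExampleSiteIE(InfoExtractor):
                                                                                                                                    _VALID_URL = r'https?://(?:www\.)?examplesite\.test/watch/(?P<id>[0-9a-z]+)'

                                                                                                                                    def _real_extract(self, url):
                                                                                                                                        video_id = self._match_id(url)
                                                                                                                                        webpage = self._download_webpage(url, video_id)
                                                                                                                                        return {
                                                                                                                                            'id': video_id,
                                                                                                                                            'title': self._og_search_title(webpage),
                                                                                                                                            # One direct media URL from the OpenGraph tags;
                                                                                                                                            # real sites usually need proper format extraction.
                                                                                                                                            'url': self._og_search_video_url(webpage),
                                                                                                                                        }

                                                                                                                            Register the class in yt_dlp/extractor/_extractors.py in the fork and it works. That it only takes a few minutes is exactly why it doesn't belong in an upstream PR.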

                                                                                                                          • judahmeek 2 days ago

                                                                                                                            A LinkedIn comment I made on an adjacent topic:

                                                                                                                            > If the job market is unfavourable to juniors, become senior.

                                                                                                                            That requires a network deep enough that other professionals are willing to critique your work.

                                                                                                                            So... open-source contributions, I guess?

                                                                                                                            This increases the pressure on the senior developers who currently maintain open-source packages, at the same time that AI is stealing the attention that previously rewarded open-source work.

                                                                                                                            Seems like we need something like blockchain gas on open-source PRs to reduce spam, incentivize open-source maintainers, and let others signal support for a suggestion while putting their money where their mouth is.
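
                                                                                                                            As a sketch of the mechanics only (every name below is invented; there is no real payment rail or chain here), the flow could look like this:

                                                                                                                                # Hypothetical deposit-based PR queue: contributors stake a deposit,
                                                                                                                                # supporters can add to it, and review refunds or forfeits the stake.
                                                                                                                                from dataclasses import dataclass


                                                                                                                                @dataclass
                                                                                                                                class StakedPR:
                                                                                                                                    author: str
                                                                                                                                    deposit: float        # staked up front by the contributor
                                                                                                                                    support: float = 0.0  # added by others to signal interest


                                                                                                                                class PRQueue:
                                                                                                                                    def __init__(self, min_deposit: float) -> None:
                                                                                                                                        self.min_deposit = min_deposit
                                                                                                                                        self.open: dict[int, StakedPR] = {}
                                                                                                                                        self.maintainer_pool = 0.0  # accrues to whoever reviews

                                                                                                                                    def submit(self, pr_id: int, author: str, deposit: float) -> None:
                                                                                                                                        if deposit < self.min_deposit:
                                                                                                                                            raise ValueError("deposit below the spam threshold")
                                                                                                                                        self.open[pr_id] = StakedPR(author, deposit)

                                                                                                                                    def endorse(self, pr_id: int, amount: float) -> None:
                                                                                                                                        # Money where their mouth is: anyone can back an open PR.
                                                                                                                                        self.open[pr_id].support += amount

                                                                                                                                    def review(self, pr_id: int, is_spam: bool) -> float:
                                                                                                                                        pr = self.open.pop(pr_id)
                                                                                                                                        if is_spam:
                                                                                                                                            # Slop forfeits the whole stake to the maintainer.
                                                                                                                                            self.maintainer_pool += pr.deposit + pr.support
                                                                                                                                            return 0.0
                                                                                                                                        # Good-faith PRs get the deposit back; support pays for review.
                                                                                                                                        self.maintainer_pool += pr.support
                                                                                                                                        return pr.deposit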

                                                                                                                            • oguz-ismail2 2 days ago

                                                                                                                              > If the job market is unfavourable to juniors, become senior.

                                                                                                                              Don't love your job, job your love.

                                                                                                                              • allarm 17 hours ago

                                                                                                                                > If the job market is unfavourable to juniors, become senior.

                                                                                                                                That’s just the regular LinkedIn nonsense. Very few people have the time and other resources to become seniors while unemployed. On top of that, it’s still unlikely that they’ll pass the HR filter without senior positions on their resumes, regardless of their actual knowledge.

                                                                                                                                • crguixurcghixr 2 days ago

                                                                                                                                  [dead]

                                                                                                                                • shevy-java 2 days ago

                                                                                                                                  Didn't take long before the quality went downhill.

                                                                                                                                  Skynet was evil and impressive in The Terminator. Skynet 3.0 in real life sucks: the AI slop annoys the hell out of me. I now need a browser extension that filters out ALL AI.

                                                                                                                                  • lifetimerubyist 2 days ago

                                                                                                                                    At first I aggressively banned anyone that submitted slop to my projects.

                                                                                                                                    Then I just took my hosting private. I can’t be arsed to put in the effort when they don’t.

                                                                                                                                    • exactlie 2 days ago

                                                                                                                                      > <BROWN AND WHITE DRAWING OF AN ASSHOLE> claude added the Task issue type 4 hours ago

                                                                                                                                      is this satire?

                                                                                                                                      • steveruizok 2 days ago

                                                                                                                                        I have a GitHub Action that labels and tags issues automatically. It also standardizes the issue title. I love this script and would recommend it to anyone. https://github.com/tldraw/tldraw/blob/ce745d1ecc1236633d2bf6...
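
                                                                                                                                        For anyone who doesn't want to read the linked file: the core of such a triage step is just a couple of REST calls. This is a generic sketch, not our actual implementation; triage_issue, the labels, and the prefix are made up, but the endpoints are GitHub's real issues API:

                                                                                                                                            # Hypothetical triage step: standardize an issue title and apply
                                                                                                                                            # labels via the GitHub REST API. Assumes GITHUB_TOKEN and
                                                                                                                                            # GITHUB_REPOSITORY ("owner/repo") in the environment, as inside
                                                                                                                                            # a GitHub Actions job.
                                                                                                                                            import os

                                                                                                                                            import requests

                                                                                                                                            API = "https://api.github.com"
                                                                                                                                            HEADERS = {
                                                                                                                                                "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
                                                                                                                                                "Accept": "application/vnd.github+json",
                                                                                                                                            }


                                                                                                                                            def triage_issue(number: int, labels: list[str], prefix: str) -> None:
                                                                                                                                                url = f"{API}/repos/{os.environ['GITHUB_REPOSITORY']}/issues/{number}"
                                                                                                                                                resp = requests.get(url, headers=HEADERS)
                                                                                                                                                resp.raise_for_status()
                                                                                                                                                title = resp.json()["title"]
                                                                                                                                                # Standardize the title, then apply the labels.
                                                                                                                                                if not title.startswith(prefix):
                                                                                                                                                    requests.patch(url, headers=HEADERS,
                                                                                                                                                                   json={"title": f"{prefix} {title}"}).raise_for_status()
                                                                                                                                                requests.post(f"{url}/labels", headers=HEADERS,
                                                                                                                                                              json={"labels": labels}).raise_for_status()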

                                                                                                                                        • stavros 2 days ago

                                                                                                                                          This is excellent, thank you.

                                                                                                                                          • account42 2 days ago

                                                                                                                                            That's a sure way to guarantee that I will never contribute anything to a project.

                                                                                                                                          • raincole 2 days ago

                                                                                                                                            I'm having a hard time finding what's supposed to be satire here.