Some things to expect in 2025 (lwn.net), submitted by signa11 20 hours ago
  • kirubakaran 19 hours ago

    > A major project will discover that it has merged a lot of AI-generated code

    My friend works at a well-known tech company in San Francisco. He was reviewing his junior team member's pull request. When asked what a chunk of code did, the team member matter-of-factly replied "I don't know, chatgpt wrote that"

    • alisonatwork 14 hours ago

      I have heard the same response from junior devs and external contractors for years, either because they copied something from StackOverflow, or because they copied something from a former client/employer (popular one in China), or even because they just uncritically copied something from another piece of code in the same project.

      From the point of view of these sorts of developers they are being paid to make the tests go green or to make some button appear on a page that kindasorta does something in the vague direction of what was in the spec, and that's the end of their responsibility. Unused variables? Doesn't matter. Unreachable code blocks? Doesn't matter. Comments and naming that have nothing to do with the actual business case the code is supposed to be addressing? Doesn't matter.

      I have spent a lot of time trying to mentor these sorts of devs and help them to understand why just doing the bare minimum isn't really a good investment in their own career not to mention it's disrespectful of their colleagues who now need to waste time puzzling through their nonsense and eventually (inevitably) fixing their bugs... Seems to get through about 20% of the time. Most of the rest of the time these folks just smile and nod and continue not caring, and companies can't afford the hassle of firing them, then you open LinkedIn years later and turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant who happens to take pride in their work.

      Sorry, got a little carried away. Anywho, the point is LLMs are just another tool for these folks. It's not new, it's just worse now because of the mixed messaging where executives are hyping the tech as a magical solution that will allow them to ship more features for less cost.

      • KronisLV 11 hours ago

        > I have spent a lot of time trying to mentor these sorts of devs and help them to understand why just doing the bare minimum isn't really a good investment in their own career not to mention it's disrespectful of their colleagues who now need to waste time puzzling through their nonsense and eventually (inevitably) fixing their bugs... Seems to get through about 20% of the time. Most of the rest of the time these folks just smile and nod and continue not caring, and companies can't afford the hassle of firing them, then you open LinkedIn years later and turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant who happens to take pride in their work.

        For them, this clearly sounds like personal success.

        There's also a lot of folks who view programming as just a stepping stone on the path to becoming well-paid managers, and couldn't care less about all the stuff the nerds talk about.

        Kind of unfortunate, but oh well. I also remember helping out someone with their code back in my university days: none of it was indented, things that probably shouldn't be on the same line were, and their answer was that they didn't care in the slightest about how it worked, they just wanted it to work. Same reasoning.

        • anal_reactor 10 hours ago

          I used to be fascinated about computers, but then I understood that being a professional meeting attender pays more for less effort.

          • KronisLV 10 hours ago

            I still like it, I just acknowledge that being passionate isn't compatible with the corpo culture.

            Reminds me of this: https://www.stilldrinking.org/programming-sucks

            • epiccoleman 4 hours ago

              That is an all time favorite that I've come back to many times over the years. It's hard to choose just one quote, but this one always hits for me:

              > You are an expert in all these technologies, and that’s a good thing, because that expertise let you spend only six hours figuring out what went wrong, as opposed to losing your job.

            • oblio 6 hours ago

              Pays more for less effort, and frequently less risk. Just make sure to get enough headcount to go over the span-of-control number.

          • oytis 11 hours ago

            > Most of the rest of the time these folks just smile and nod and continue not caring, and companies can't afford the hassle of firing them, then you open LinkedIn years later and turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant who happens to take pride in their work.

            Wow. I am probably very lucky, but most of managers, and especially architects I know are actually also exceptional engineers. A kind of exception was a really nice, helpful and proactive guy who happened to just not be a great engineer. He was still very useful for being nice, helpful and proactive, and was being promoted for that. "Failing up" to management would actually make a lot of sense for him, unfortunately he really wanted to code though.

            • arkh 12 hours ago

              What you describe is the state of most devops.

              Copy / download some random piece of code, monkey around to change some values for your architecture, and up we go. It works! We don't know how, we won't be able to debug it when the app goes down, but that's not our problem.

              And that's how you end up with bad examples or a lack of exhaustive options in documentation, most tutorials being a rehash of some quickstart, and people telling you "just use this helm chart or ansible recipe from some github repo to do what you want". What do those things really install? Not documented. What can you configure? Check the code.

              Coming from the dev world it feels like the infrastructure ecosystem still lives in a tribal knowledge model.

              • whatevertrevor 11 hours ago

                I'm ashamed to say this is me with trying to get Linux to behave tbh.

                I like fully understanding my code and immediate toolchain, but my dev machine feels kinda held together with duct tape.

                • Cthulhu_ 10 hours ago

                  Oof, same to be honest. It doesn't help that at some point Apache changed its configuration format, and that all of these tools seem to have reinvented their configuration file format. And that, once it's up, you won't have to touch it again for years (at least in my personal server use case; I've never done enterprise-level ops work beyond editing a shell script or CI pipeline).

                • sofixa 10 hours ago

                  I disagree. A lot of DevOps is using abstractions, yes. But using a Terraform module to deploy your managed database without reading the code and checking all options is the same as using a random library without reading the code and checking all parameters in your application. People skimping on important things exist in all roles.

                  > people tell you "just use this helm chart or ansible recipe from some github repo to do what you want". What those things really install? Not documented. What can you configure? Check the code.

                  I mean, this is just wrong. Both Ansible roles and Helm charts have normalised documentation. Official Ansible modules include docs with all possible parameters, and concrete examples of how they work together. Helm charts also come with a file that literally lists all possible options (values.yaml). And yes, checking the code is always a good idea when using third-party code you don't trust. Which is it you're complaining about: that DevOps people don't understand the code they're running, or that you have to read the code? It can't be both, surely.

                  > Coming from the dev world it feels like the infrastructure ecosystem still lives in a tribal knowledge model.

                  Rose tinted glasses, and bias. You seem to have worked only with good developer practices (or forgotten about the bad), and bad DevOps ones. Every developer fully understands React or the JS framework du jour they're using because it's cool? You've never seen some weird legacy code with no documentation?

                  • arkh 9 hours ago

                    > Rose tinted glasses, and bias. You seem to have worked only with good developer practices (or forgotten about the bad), and bad DevOps ones. Every developer fully understands React or the JS framework du jour they're using because it's cool? You've never seen some weird legacy code with no documentation?

                    Not really. I'm mainly in code maintenance, so good practices are usually those the team I join can add to old legacy projects. Right now I'm trying to modernize a web of 10-20 old ad-hoc apps. But good practices are known to exist and are widely shared, even between dev ecosystems.

                    For everything ops and devops, it looks like there are islands of knowledge which are not shared at all, at least from a newbie point of view. Take telemetry, for example: people who worked at Google or Meta all rave about the mythical tools they got to use in-house and how they cannot find anything equivalent outside... and when you check what is available "outside", it looks less powerful and all those solutions feel much the same. So you get the FAANG islands of tools and ways of doing things, the big-box commercial offerings with their armies of consultants, and then the open-source and freemium ways of doing telemetry.

                    • sofixa 8 hours ago

                      > For everything ops and devops it looks like there are like islands of knowledge which are not shared at all

                      Very strongly disagree; if anything it's the opposite. Many people read the knowledge shared by others and jump to thinking it's suitable for them as well. Microservices and Kubernetes got adopted by everyone and their grandpa because big tech uses them, without any consideration of whether they're suitable for each org.

                      > At least when coming with a newbie point of view. Like for example with telemetry: people who worked at Google or Meta all rave about the mythical tools they got to use in-house and how they cannot find anything equivalent outside... and yes when you check what is available "outside" it looks less powerful and all those solutions feel like the same. So you got the FAANG islands of tools and way to do things, the big box commercial offering and their armies of consultants and then the OpenSource and Freemium way of doing telemetry.

                      The latter two are converging with OpenTelemetry and Prometheus and related projects. Both ways are well documented, and there are a number of projects and vendors providing alternatives and various options. People can pick what works best for them (and it could very well be open source but hosted for you, cf. Grafana Cloud). I'm not sure how that's related to "islands of knowledge"... observability in general is one of the most widely discussed topics in the space.
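For concreteness, the shared baseline the two worlds are converging on is largely the Prometheus text exposition format. Below is a minimal, illustrative sketch of rendering one counter in that format; the metric name and labels are invented examples, and a real service would use an official client library rather than hand-rolling this:

```python
def render_counter(name, help_text, value, labels=None):
    """Render one counter in the Prometheus text exposition format."""
    label_str = ""
    if labels:
        # Labels are rendered as key="value" pairs, sorted for stable output.
        pairs = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        label_str = "{" + pairs + "}"
    return (f"# HELP {name} {help_text}\n"
            f"# TYPE {name} counter\n"
            f"{name}{label_str} {value}\n")

print(render_counter("http_requests_total", "Total HTTP requests.",
                     1027, {"method": "post", "code": "200"}))
```

Any Prometheus-compatible scraper, and any OpenTelemetry exporter targeting Prometheus, produces or consumes text of this shape, which is part of why the two stacks interoperate.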

                • beAbU 11 hours ago

                  Do other companies not have static analysis integrated into the CI/CD pipeline?

                  We by default block any and all PRs that contain funky code: high cyclomatic complexity, unused variables, bad practice, overt bugs, known vulnerabilities, inconsistent style, insufficient test coverage, etc.

                  If that code is not pristine, it's not going in. A human dev will not even begin the review process until at least the static analysis light is green. Time is then spent mentoring the greens as to why we do this, why it's important, and how you can get your code to pass.

                  I do think some devs still use AI tools to write code, but I believe that the static analysis step will at least ensure some level of forced ownership over the code.
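As a rough illustration of what one of those gates checks mechanically, here is a hypothetical, minimal unused-variable detector built on Python's stdlib `ast` module. It is a sketch of the idea only, not any particular CI product, and real linters handle many more cases (augmented assignment, closures, `del`, etc.):

```python
import ast

def unused_locals(source: str) -> set[str]:
    """Flag names a function assigns but never reads (rough sketch)."""
    unused: set[str] = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            assigned, loaded = set(), set()
            for sub in ast.walk(node):
                if isinstance(sub, ast.Name):
                    if isinstance(sub.ctx, ast.Store):
                        assigned.add(sub.id)   # name is written
                    elif isinstance(sub.ctx, ast.Load):
                        loaded.add(sub.id)     # name is read
            unused |= assigned - loaded
    return unused

snippet = """
def total(xs):
    tmp = 42          # assigned but never read
    s = sum(xs)
    return s
"""
print(unused_locals(snippet))  # {'tmp'}
```

A gate like this fails the pipeline before a human reviewer ever has to spend headspace on it.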

                  • liontwist 8 hours ago

                    I think it’s a good thing to use such tools. But no amount of tooling can create quality.

                    It gives you an illusion of control. Rules are a cheap substitute for thinking.

                    • ericmcer 4 hours ago

                      That is a softball question for an AI: this block of code is throwing these errors, can you tell me why?

                      • lrem 11 hours ago

                        Just wait till AI learns how to pass your automated checks without getting any better at the semantics. Unused variables bad? Let's just increment/append whatever every iteration, etc.
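A hypothetical sketch of what that gaming looks like: the accumulator below exists purely so an "unused variable" rule sees a read, while contributing nothing to the result.

```python
def count_evens(xs):
    acc = []                 # exists only so the linter sees acc being used
    n = 0
    for x in xs:
        if x % 2 == 0:
            n += 1
        acc.append(x)        # textually "uses" acc every iteration, adds no meaning
    return n                 # acc is thrown away; the check passes, semantics unchanged

print(count_evens([1, 2, 3, 4]))  # 2
```

The rule is satisfied to the letter, yet the code is exactly as dead as before, which is why tooling alone can't substitute for review.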

                        • whatevertrevor 11 hours ago

                          And then we'll need AI tools to diagnose and profile AI generated code to automagically improve performance.

                          I can't wait to retire.

                      • quietbritishjim 8 hours ago

                        It's definitely worse with LLMs than with StackOverflow. You don't need to fully understand a StackOverflow answer, but you at least need to recognise whether the question is applicable to your situation. An LLM makes the decisions completely for you, and if the code doesn't work you can even get it to figure out why for you.

                        I think young people today are at severe risk of building up what I call learning debt. This is like technical debt (or indeed real financial debt). They're getting further and further, through university assignments and junior dev roles, without doing the learning that we previously needed to. That's certainly what I've seen. But, at some point, even LLMs won't cut it for the problem they're faced with and suddenly they'll need to do those years of learning all at once (i.e. the debt becomes due). Of course, that's not possible and they'll be screwed.

                        • ben_w 7 hours ago

                          > With LLMs, it makes the decisions completely for you, and if it doesn't work you can even get it to figure out why for you.

                          To an extent. The failure modes are still weird. I've tried this kind of automation loop manually to see how good it is, and while it can, as you say, produce functional mediocre code*… it can also get stuck in stupid loops.

                          * I ran this until I got bored; it is mediocre code, but ChatGPT did keep improving the code as I wanted it to, right up to the point of boredom: https://github.com/BenWheatley/JSPaint

                        • bryanrasmussen 13 hours ago

                          >Unused variables? Doesn't matter. Unreachable code blocks? Doesn't matter. Comments and naming that have nothing to do with the actual business case the code is supposed to be addressing? Doesn't matter.

                          maybe I am just supremely lucky, but while I have encountered people like that (in the coding part), it is somewhat rare in my experience. These comments on HN always make it seem like it's at least 30% of the people out there.

                          • alisonatwork 13 hours ago

                            I think even though these types of developers are fairly rare, they have a disproportionate negative impact on the quality of the code and the morale of their colleagues, which is perhaps why people remember them and talk about it more often. The p95 developers who are more-or-less okay aren't really notable enough to be worth complaining about on HN, since they are us.

                            • ryandrake 12 hours ago

                              And, as OP alluded to, I bet these kinds of programmers tend to “fail upward” and disproportionately become eng managers and directors, spreading their carelessness over a wider blast radius, while the people who care stagnate as perpetual “senior software engineers”.

                              • bryanrasmussen 10 hours ago

                                maybe they care more about quality as they become managers, etc. Quality takes effort; maybe they don't like making the effort themselves, but like making other people make it.

                          • redeux 7 hours ago

                            > Most of the rest of the time these folks just smile and nod and continue not caring, and companies can't afford the hassle of firing them, then you open LinkedIn years later and turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant who happens to take pride in their work.

                            I’ve heard this sentiment several times over the years, and what I think a lot of people don't realize is that they're just playing a different game than you. Their crappy code is a feature, not a bug, because they're expending the energy on politics rather than coding. In corporations politics is a form of work, but it's not work that many devs want to do. So people will say the uncaring dev is doing poor work, but really they're just not seeing the real work being done.

                            I’m not saying this is right or wrong, it’s just an observation. Obviously this isn’t true for everyone who does a poor job, but if you see that person start climbing the ladder, that’s the reason.

                            • stcroixx 5 hours ago

                              The kind of work you're describing doesn't benefit the company, it benefits the individual. It's not what they were hired to do. The poor quality code they produce can be a net negative when it causes bugs, maintenance issues, etc. I think it's always the right choice to boot such a person from any company once they've been identified.

                            • ojbyrne 13 hours ago

                              I have been told (at a FAANG) not to fix those kind of code smells in existing code. “Don’t waste time on refactoring.”

                              • dawnerd 11 hours ago

                                To be fair sometimes it just isn’t worth the companies time.

                              • ben_w 7 hours ago

                                > I have spent a lot of time trying to mentor these sorts of devs and help them to understand why just doing the bare minimum isn't really a good investment in their own career not to mention it's disrespectful of their colleagues who now need to waste time puzzling through their nonsense and eventually (inevitably) fixing their bugs... Seems to get through about 20% of the time.

                                I've seen that, though fortunately only in one place. Duplicated entire files, including the parts to which I had added "TODO: deduplicate this function" comments, rather than change access specifiers from private to public and subclass.

                                By curious coincidence, 20% was also roughly the percentage of lines in the project which were, thanks to him, blank comments.

                                • devsda 13 hours ago

                                  > then you open LinkedIn years later and turns out somehow they've failed up to manager, architect or executive while you're still struggling along as a code peasant

                                  That's because they come across as result-oriented, go-getter kinds of people, while the others will be seen as uptight individuals. Unfortunately, management, for better or worse, self-selects for the first kind.

                                  LLMs are only going to make it worse. If you can write clean code in half a day and an LLM can generate a "working" spaghetti mess in a few minutes, management will prefer the mess. This will be the case for many organizations where software is just an additional supporting expense and not a critical part of the main business.

                                  • ChrisMarshallNY 5 hours ago

                                    I have incorporated a lot of SO code. I never incorporate it until I understand exactly what it does.

                                    I usually learn it by adapting it to my coding style and documenting it. I seldom leave it untouched. I usually modify it in one way or another, and I always add a HeaderDoc comment linking to the SO answer.

                                    So far, I have not been especially thrilled with the AI-generated code that I've encountered. I expect things to improve, rapidly, though.

                                    • 0xEF 9 hours ago

                                      The LLMs are not just another tool for these folks, but for folks who should not be touching code at all. That's the scary part. In my field (industrial automation), I have had to correct issues three times now in the ladder logic on a PLC that drives an automation cell that can definitely kill or hurt someone in the right circumstances (think maintenance/repair). When asked where the logic came from, they showed me the tutorials they feed to their LLM of choice to "teach" it ladder logic, then had it spit out answers to their questions. Safety checks were missed, needless to say, which thankfully only broke the machines.

                                      These are young controls engineers at big companies. I won't say who, but many of you probably use one of their products to go to your own job.

                                      I am not against using LLMs as a sort of rubber duck to bounce ideas off of, or maybe to get you thinking in a different direction for the sake of problem solving, but letting them do the work for you and not understanding how to check the validity of that work is maddeningly dangerous in some situations.
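A heavily simplified, hypothetical sketch (invented names, Python rather than real ladder logic) of the kind of interlock condition such a program must re-evaluate every scan cycle, and which generated logic can silently drop:

```python
def motion_permitted(guard_closed: bool, e_stop_ok: bool,
                     maintenance_mode: bool) -> bool:
    """Fail-safe interlock: motion is allowed only when every
    safety condition holds; any single unsafe input vetoes it."""
    return guard_closed and e_stop_ok and not maintenance_mode

print(motion_permitted(True, True, False))   # True: all conditions satisfied
print(motion_permitted(True, True, True))    # False: maintenance mode vetoes motion
```

The failure mode described above amounts to the generated rungs omitting one of these AND'd conditions, so the output energizes when it should not.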

                                      • Cthulhu_ 10 hours ago

                                        You can lead a horse to water, etc. What worked for me wasn't so much a mentor telling me xyz was good / bad, but metrics and quality gates - Sonar (idk when it was renamed to sonarqube or what the difference is) will flag up these issues and simply make the merge request unmergeable unless the trivial issues are fixed.

                                        Because that's the frustrating part; they're trivial issues, unreachable code and unused variables are harmless (on paper), just a maintenance burden and frustrating for whoever has to maintain it later on. But because they're trivial, the author doesn't care about them either. Trivial issues should be automatically fixed and / or checked by tooling, it shouldn't cost you (the reviewer) any headspace in the first place. And it shouldn't need explanation or convincing to solve either. Shouldn't, but here we are.

                                        But yeah, the next decade will be interesting. I'm not really using it in my code yet because, idk, the integration broke again or I keep forgetting it exists. But we integrated a tool in our GitLab that generates a code review, both summarizing the changes and highlighting the risks/issues, if any. I don't like that, but the authors of merge requests aren't writing proper merge request descriptions either, so I suppose an AI-generated executive summary is better than nothing.

                                        • svilen_dobrev 4 hours ago

                                          > failed up to manager...

                                          see, everything around can be a tool. Sticks, screwdrivers, languages, books, phones, cars, houses, roads, software, knowledge, ..

                                          in my rosy glasses this line stops at people (or maybe life-forms?). People are not tools. Should not be treated as such.

                                          But that is not the case in reality. So anyone for whom other people are tools will fail (or fall) upwards (or will be pulled there). Sooner or later.

                                          sorry if somewhat dark..

                                        • Taylor_OD 17 hours ago

                                          This is more of an early-career engineer thing than a ChatGPT thing. "I don't know, I found it on stackoverflow" could easily have been the answer at any point in the last ten years.

                                          • devsda 16 hours ago

                                            The main problem is not the source of the solution, but not making an effort to understand the code they have put in.

                                            The "I don't know" might as well be "I don't care".

                                            • arkh 12 hours ago

                                              That's where you'd like your solution engine to be able to tell you how it arrived at the solution it is giving you. Something good answers on Stack Overflow will do: links to the relevant documentation, steps you can go through to get a better diagnosis of your problem, etc.

                                              Get the fire lit with an explanation of where to get wood and how to light it in your conditions, so next time you don't need to consult your solution engine.

                                            • Vampiero 10 hours ago

                                                No, a real engineer goes on SO to understand. A junior goes on SO to copy and paste. If your answer is "I don't know, I just copied", you're not doing any engineering, and it's awful to pretend you are. Our job is literally about asking "why" and "how" until we don't need to anymore, because our pattern-matching skills allow us to generalize.

                                              At this point in my career I rarely ever go to SO, and when I do it's because of some obscure thing that 7 other people came across and decided to post a question about. Or to look up "how to do the most basic shit in language I am not familiar with", but that role was taken over by LLMs.

                                              • mrweasel 11 hours ago

                                                There's nothing inherently wrong with getting help from either an LLM or StackOverflow; it's the "I don't know" part that bothers me.

                                                One of the funnier reactions to "I got it from StackOverflow" is the follow-up question "From the question or the answers?"

                                                If you just add code without understanding how it works, regardless of where it came from and potential licensing issues, then I question your view of programming. If I have a painter come in and paint my house, and he gets paint all over the place (floors, windows, electrical sockets) but still gets the walls the color I want, then I wouldn't consider that person a professional painter.

                                                • sebazzz 4 hours ago

                                                  The LLM also tends to do a good bit of the integration of the code into your codebase. With SO you need to do it yourself, so you at least need to understand the outer boundary of the code. And on StackOverflow the code has often undergone some form of peer review; the LLM just outputs, without any vetting or footnote.

                                                • DowsingSpoon 16 hours ago

                                                  I am fairly certain that if someone did that where I work then security would be escorting them off the property within the hour. This is NOT Okay.

                                                  • bitmasher9 15 hours ago

                                                    Where I work we are actively encouraged to use more AI tools while coding, to the point where my direct supervisor asked why my team’s usage statistics were lower than company average.

                                                    • dehrmann 15 hours ago

                                                      It's not necessarily the use of AI tools (though the license parts are an issue); it's that someone submitted code for review without knowing how it works.

                                                      • johnisgood 11 hours ago

                                                        I use AI these days and I know how things work; there really is a huge difference. It helps me get the AI to write code faster and the way I want it, something I could do myself, just more slowly.

                                                        • xiasongh 15 hours ago

                                                          Didn't people already do that before, copying and pasting code off Stack Overflow? I don't like it either, but this issue has always existed; perhaps it is just more common now.

                                                          • hackable_sand 14 hours ago

                                                            Maybe it's because I'm self-taught, but I have always accounted for every line I push.

                                                            It's insulting that companies are paying people to cosplay as programmers.

                                                            • ascorbic 12 hours ago

                                                              It's probably more common among self-taught programmers (and I say that as one myself). Most go through the early stage of copying chunks of code and seeing if they work. Maybe not blindly copying it, but still copying code from examples or whatever. I know I did (except it was 25 years ago from Webmonkey or the php.net comments section rather than StackOverflow). I'd imagine formally-educated programmers can skip some (though not all) of that by having to learn more of the theory at first.

                                                              • hackable_sand 10 hours ago

                                                                If people are being paid to copy and run random code, more power to them. I wouldn't have dreamt of getting a programming job until I was literate.

                                                              • guappa 12 hours ago

                                                                I've seen self taught and graduates alike do that.

                                                              • noisy_boy 13 hours ago

                                                                Now there is even less excuse for not knowing what it does, because the same ChatGPT that gave you the code can explain it too. That wasn't a luxury available in the copy/paste-from-StackOverflow days (though explanations of varying degrees of depth were available there too).

                                                                • ascorbic 12 hours ago

                                                                  Yes, and I think the mistakes that LLMs commonly make are less problematic than Stack Overflow's. LLMs seem most often to either hallucinate APIs or use outdated ones; those failures are easier to detect because the code just doesn't work. They're not perfect, but they seem less inclined to generate the bad practices and security holes that are the bread and butter of Stack Overflow. In fact, they're pretty good at identifying those sorts of problems in existing code.

                                                                • rixed 14 hours ago

                                                                  Or importing a new library that's not been audited. Or compiling it with a compiler that's not been audited? Or running it on silicon that's not been audited?

                                                                  We can draw the line in many places.

                                                                  I would take generated code that a rookie obtained from an LLM and copied without understanding all of it, but thoughtfully tested, over something he authored himself and submitted for review without enough checks.

                                                                  • yjftsjthsd-h 12 hours ago

                                                                    > We can draw the line in many places.

                                                                    That doesn't make those places equivalent.

                                                                    • whatevertrevor 11 hours ago

                                                                      That's a false dichotomy. People can write code themselves and thoroughly test it too.

                                                                  • masteruvpuppetz 15 hours ago

                                                                    I think we should have (and perhaps already have) reached a place where AI-written code is acceptable.

                                                                    • bigstrat2003 15 hours ago

                                                                      Whether or not it's acceptable to submit AI code, it is clearly unacceptable to submit code that you don't even understand. If that's all an employee is capable of, why on earth would the employer pay them a software engineer's salary versus hiring someone to do the exact same thing for minimum wage?

                                                                      • userbinator 14 hours ago

                                                                        Or even replace them with the AI directly.

                                                                      • dpig_ 15 hours ago

                                                                        What a god awful thing to hear.

                                                                        • bsder 13 hours ago

                                                                          The problem is that "AI" is likely whitewashing the copyright from proprietary code.

                                                                          I asked one of the "AI" assistants to do a very specific algorithmic problem for me and it did. And included unit tests which just so happened to hit all the exact edge cases that you would need to test for with the algorithm.

                                                                          The "AI assistant" very clearly regurgitated the code of somebody. I, however, couldn't find a particular example of that code no matter how hard I searched. It is extremely likely that the regurgitated code was not open source.

                                                                          Who is liable if I incorporate that code into my product?

                                                                          • guappa 12 hours ago

                                                                            According to microsoft: "the user".

                                                                            There are companies that scan code to see if it matches known open source code. However, they probably just scan GitHub, so they won't even have a lot of the big projects.

                                                                            • kybernetikos 12 hours ago

                                                                              This seems like you don't believe that AI can produce correct new work, but it absolutely can.

                                                                              I've no idea whether in this case it directly copied someone else's work, but I don't think that it writing good unit tests is evidence that it did - that's it doing what it was built to do. And you searching and failing to find a source is weak evidence that it did not.

                                                                      • bigstrat2003 15 hours ago

                                                                        To be fair I don't think someone should get fired for that (unless it's a repeat offense). Kids are going to do stupid things, and it's up to the more experienced to coach them and help them to understand it's not acceptable. You're right that it's not ok at all, but the first resort should be a reprimand and being told they are expected to understand code they submit.

                                                                        • LastTrain 14 hours ago

                                                                          Kids, sure. University trained professional and paid like one? No.

                                                                          • raverbashing 14 hours ago

                                                                            You have high expectations of the current batch of college graduates.

                                                                            (and honestly it's not like the past graduates were much better, but they didn't have chatgpt)

                                                                            • The_Colonel 13 hours ago

                                                                              A cynical take would be that the current market conditions allow you to filter out such college graduates and only take the better ones.

                                                                              • solatic 11 hours ago

                                                                                And how do you propose filtering them out? There's a reason why college students are using LLMs: they're getting better grades for less effort. I don't assume you're proposing selecting students with worse grades on purpose?

                                                                                • The_Colonel 11 hours ago

                                                                                  I wouldn't hire based on grades.

                                                                                  I think what the junior did is a reason to fire them (then you can try again with better selection practices). Not because they use code from LLMs, but that they don't even try to understand what it is doing. This says a lot about their attitude to programming.

                                                                                  • LastTrain 5 hours ago

                                                                                    One way to filter them out, relevant to this thread, would be to let them go if they brazenly turned in work they did not create and do not understand.

                                                                            • DowsingSpoon 14 hours ago

                                                                              I understand the point you’re trying to get across. For many kinds of mistakes, I agree it makes good sense to warn and correct the junior. Maybe that’s the case here. I’m willing to concede there’s room for debate.

                                                                              Can you imagine the fallout from this, though? Each and every line of code this junior has ever touched needs to be scrutinized to determine its provenance. The company now must assume the employee has been uploading confidential material to OpenAI too. This is an uncomfortable legal risk.

                                                                              How could you trust the dev again after the dust is settled?

                                                                              Also, it raises further concerns for me that this junior seems to be genuinely, honestly unaware that using ChatGPT to write code wouldn’t at least be frowned upon. That’s a frankly dangerous level of professional incompetence. (At least they didn’t try to hide it.)

                                                                              Well now I’m wondering what the correct way would be to handle a junior doing this with ChatGPT, and what the correct way would be to handle similar kinds of mistakes such as copy-pasting GPL code into the proprietary code base, copy-pasting code from Stack Overflow, sharing snippets of company code online, and so on.

                                                                              • manmal 13 hours ago

                                                                                > The company now must assume the employee has been uploading confidential material to OpenAI too.

                                                                                If you think that’s not already the case for most of your codebase, you might be in for a rough awakening.

                                                                                • thaumasiotes 14 hours ago

                                                                                  > Also, it raises further concerns for me that this junior seems to be genuinely, honestly unaware that using ChatGPT to write code wouldn’t at least be frowned upon.

                                                                                  Austen Allred is selling this as the future of programming. According to him, the days of writing code into an IDE are over.

                                                                                  https://www.gauntletai.com/

                                                                                  • manmal 13 hours ago

                                                                                    Responding to the link you posted: Apparently, the future of programming is 100 hour weeks? Naive me was thinking we could work less and think more with these new tools at our disposal.

                                                                                    • ojbyrne 13 hours ago

                                                                                      Also, you'd think with their fancy AI coding they could update their dates to the future, or at least disable the page for a past-dated session.

                                                                                      • guappa 12 hours ago

                                                                                        Seems people didn't read the link and are downvoting you, possibly because they don't understand what you're talking about.

                                                                                        • manmal 12 hours ago

                                                                                          Thanks, added context.

                                                                                      • whatevertrevor 11 hours ago

                                                                                        Without prior knowledge, that reads like a scam?

                                                                                        A free training program with a promise of a guaranteed high-paying job at the end? Where have I heard that before? Seems like their business model is to churn people through these sessions and then monetize whatever shitty chatbot app they build through the training.

                                                                                      • guappa 13 hours ago

                                                                                        I've seen seniors and above do that.

                                                                                        They never cared about respecting software licenses until Biden said they must. Then they started to lament and cry.

                                                                                        • ujkiolp 13 hours ago

                                                                                          unless you work for hospitals or critical infrastructure, this reaction is overblown and comical

                                                                                      • phinnaeus 15 hours ago

                                                                                        Are you hiring?

                                                                                        • userbinator 14 hours ago

                                                                                          In such an environment, it would be more common for access to ChatGPT (or even most of the Internet) to be blocked.

                                                                                          • dyauspitr 13 hours ago

                                                                                            Why? I encourage all my devs to use AI but they need to be able to explain what it does.

                                                                                          • ben_w 7 hours ago

                                                                                            > He was reviewing his junior team member's pull request. When asked what a chunk of code did, the team member matter-of-factly replied "I don't know, chatgpt wrote that"

                                                                                            I remember being a junior nearly 20 years back: a co-worker asked me how I'd implemented an invulnerability status, and I said something equally stupid, despite knowing perfectly well how I'd implemented it, and despite there not being any consumer-grade AI more impressive than spam filters and Office's spelling and grammar checking.

                                                                                            Which may or may not be relevant to the example of your friend's coworker, but I do still wonder how much of my answers as a human are on auto-complete. It's certainly more than none, and not just from that anecdote… https://duckduckgo.com/?t=h_&q=enjoy+your+meal+thanks+you+to...

                                                                                            • rixrax 10 hours ago

                                                                                              Essentially you're paying a human to be a proxy between the requirements, the LLM, and the codebase. Some people I'm talking to lament having to pay top dollar to their junior (and other kinds, I'm sure) devs for this, but I think this is and will be the new reality and the new normal. Instead, we should start thinking about how to make the best of it, and how to help maximize success for these devs.

                                                                                              A few decades down the road, though, we are likely to view this current situation much as we now look at the 'human computers'[0] of yesteryear.

                                                                                              [0] https://en.wikipedia.org/wiki/Women_in_computing

                                                                                              • ErrantX 12 hours ago

                                                                                                Feels like a controls failure as much as anything else. At any decently sized company that allows unrestricted access to LLMs, this is going to be the tip of the iceberg.

                                                                                                Also, the culture of not caring comes from somewhere, not from ChatGPT.

                                                                                                • stcroixx 5 hours ago

                                                                                                  This is the norm on my majority H1B team. Nobody sees anything wrong with it but me so I stopped caring too.

                                                                                                  • gunian 16 hours ago

                                                                                                    the saddest part is that if i wrote the code myself it would be worse lol. GPT is coding at an intern level, and as a dumb human being I feel sad I have been replaced, but it's not as catastrophic as they made it seem

                                                                                                    it's interesting to see the underlying anxiety among devs though. I think there is a place in the back of their minds that knows the models will get better and better, and someday could reach staff engineer level

                                                                                                    • nozzlegear 16 hours ago

                                                                                                      I don't think that's the concern at all. The concern (imo) is that you should at least understand what the code is doing before you accept it verbatim and add it to your company's codebase. The potential it has to introduce bugs or security flaws is too great to just accept it without understanding it.

                                                                                                      • dataviz1000 15 hours ago

                                                                                                        I've been busy with a personal coding project. Working through problems with an LLM, which I haven't used professionally yet, has been great. Countless times in the past I've spent hours poring over Stack Overflow and GitHub repository code looking for solutions. Quite often I would have to solve them myself, and would always post the answer a day or two later below my question on Stack Overflow. A big milestone for a software engineer is getting to the point where a difficult problem can't be solved with internet search, asking colleagues, or asking on Stack Overflow, no matter how well-written and detailed the question, because the problems are esoteric: the edge of innovation is solitude.

                                                                                                        Today I give the input to the LLM, tell it what the output should be, and magically a minute later it is solved. I was thinking today about how long it has been since I was stuck and stressed on a problem. With this personal project I'm prototyping and doing a lot of experimentation, so having an LLM saves a ton of time and keeps the momentum at a fast pace. The iteration process is a little different: frequent stops to refactor, clean up, make the code consistent, and log the input and output to the console to verify.

                                                                                                        Perhaps take intern's LLM code and have the LLM do the code review. Keep reviewing the code with the LLM until the intern gets it correct.
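                                                                                                        The review loop suggested above could be sketched in a few lines. (A hypothetical sketch: `request_review` stands in for whatever LLM call you would actually make, stubbed out here so the control flow is self-contained.)

```python
def review_loop(code, request_review, max_rounds=5):
    """Iteratively ask a reviewer (e.g. an LLM) for feedback until it approves.

    `request_review` takes the current code and returns a tuple
    (approved, revised_code). The loop stops at the first approval.
    """
    for round_num in range(1, max_rounds + 1):
        approved, code = request_review(code)
        if approved:
            return code, round_num
    raise RuntimeError("review did not converge within max_rounds")

# Stub reviewer: "fixes" the code once, then approves on the next round.
# A real implementation would call an LLM API here instead.
def stub_reviewer(code):
    if "TODO" in code:
        return False, code.replace("TODO", "DONE")
    return True, code

final_code, rounds = review_loop("x = 1  # TODO", stub_reviewer)
```

                                                                                                        The point of the loop is that the intern still has to drive it and understand each revision; the LLM only supplies the feedback.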

                                                                                                        • nozzlegear 13 hours ago

                                                                                                          My experience with LLMs and code generation is usually the opposite, even using ChatGPT and the fancy o1 model. Maybe it's because I write a lot of F#, and the training data for that is probably low. When I'm not writing F#, then I like to write functional-style code. But either way, nine times out of ten I'm only using LLMs for "rubber ducking," as the code they give me usually falls flat on its face with obvious compiler errors.

                                                                                                          I do agree that I feel much more productive with LLMs though. Just being able to rubber-duck my ideas with an AI and talk about code is extremely helpful, especially because I'm a solo dev/freelancer and don't usually have anyone else to do that with. And even though I don't typically use the code they give me, it's still helpful to see what the AI is thinking and explore that.

                                                                                                          • dataviz1000 12 hours ago

                                                                                                            I have had similar experiences with less popular libraries. My favorite state machine library released a new version a year ago, and the LLMs, regardless of prompts telling them not to, will always use the old API. I find the LLMs worthless for organizing ideas across multiple files. And, worst of all, they are not by their nature capable of consistency.

                                                                                                            On the other hand, I use d3.js for data visualization, which has had a stable API for years, has likely hundreds of thousands of examples that are small and contained in a single file, and has many blog posts, O'Reilly books, and tutorials. The LLMs create perfect, high-quality data visualizations. Any request to change one, such as adding dynamic sliders or styling tooltips, is handled without errors or bugs. People who do data visualization will likely be the first to go. :(

                                                                                                            I am concerned that new libraries will not gain traction because the LLMs haven't been trained to implement them. We will be able to implement all the popular libraries, languages, and techniques quickly, however, innovation might stall if we rely on these machines stuck in the past.

                                                                                                        • gunian 15 hours ago

                                                                                                          Exactly why devs are getting the big bucks.

                                                                                                          That's right now. At some point, what if someone figures out a way to make it deterministic and able to write code without bugs?

                                                                                                          • eggnet 15 hours ago

                                                                                                            Then the programming language becomes natural language and you’ll have to be very good at describing what you want. Unless you are talking about AGI, aka, the singularity. Which is a whole other topic.

                                                                                                            • gunian 14 hours ago

                                                                                                              not AGI; at that point all human jobs can be replaced, that's my personal bar at least

                                                                                                              i'm thinking models get small enough, you fine-tune them on your code, you add fuzzing, rewriting

                                                                                                              it may not be bug-free, but could it become self-healing with minimal / known natural language locations? or instead of x engineers, one feeds the skeleton to chatgpt 20 or something, and instead of giving you the result immediately it does it iteratively. would still be cheaper than x devs

                                                                                                            • hackable_sand 14 hours ago

                                                                                                              You cannot write code without bugs.

                                                                                                              • manmal 13 hours ago

                                                                                                                I‘d say, you cannot write _interesting_ code without bugs.

                                                                                                                • hackable_sand 13 hours ago

                                                                                                                  You know what

                                                                                                                  One man's bug is another man's feature

                                                                                                                  • jononor 12 hours ago

                                                                                                                    Sure. And some of those people are black hats ;)

                                                                                                                    • gunian 12 hours ago

                                                                                                                      modern freedom fighters Abe Lincoln couldn't compare :)

                                                                                                            • chrisweekly 15 hours ago

                                                                                                              "AI is the payday loan* of tech debt".

                                                                                                            • jahewson 15 hours ago

                                                                                                              ChatGPT needs two years of "exceeds expectations" before that can happen.

                                                                                                              • gunian 15 hours ago

                                                                                                                I've been writing at troll level since I first got my computer at 19, so it looks like "exceeds expectations" to me lol

                                                                                                              • dyauspitr 13 hours ago

                                                                                                                It’s coding way, way above intern level. Honestly it’s probably at mid level.

                                                                                                              • userbinator 17 hours ago

                                                                                                                At least he's honest.

                                                                                                                • BiteCode_dev 10 hours ago

                                                                                                                  I'm a strong proponent of using LLM and use them extensively.

                                                                                                                  But this is a fireable offense in my book.

                                                                                                                  • ghxst 13 hours ago

                                                                                                                    Was this a case of something along the lines of an isolated function that had a bunch of bit shifting magic for some hyper optimization that was required, or was it just regular code?

                                                                                                                    Not saying it's acceptable, but the first example is maybe worth a thoughtful discussion while the latter would make me lose hope.

                                                                                                                    • johnisgood 11 hours ago

                                                                                                                      There is no shame, damn.

                                                                                                                      • undefined 11 hours ago
                                                                                                                        [deleted]
                                                                                                                      • deadbabe 18 hours ago

                                                                                                                        I hope that junior engineer was reprimanded or even put on a PIP instead of just having the reviewer say lgtm and approve the request.

                                                                                                                        • WaxProlix 18 hours ago

                                                                                                                          Probably depends a lot on the team culture. Depending on what part of the product lifecycle you're on (proving a concept, rushing to market, scaling for the next million TPS, moving into new verticals,...) and where the team currently is, it makes a lot of sense to generate more of the codebase by AI. Write some decent tests, commit, move on.

                                                                                                                          I wish my reports would use more AI tools for parts of our codebase that don't need a high bar of scrutiny, boilerplate at enterprise scale is a major source of friction and - tbh - burnout.

                                                                                                                          • not2b 17 hours ago

                                                                                                                            Unless the plan is to quickly produce a prototype that will be mostly thrown away, any code that gets into the product is going to generate far more work maintaining it over the lifetime of a product than the cost to code it in the first place.

                                                                                                                            As a reviewer I'd push back, and say that I'll only be able to approve the review when the junior programmer can explain what it does and why it's correct. I wouldn't reject it solely because chatgpt made it, but if the checkin causes breakage it normally gets assigned back to the person who checked it in, and if that person has no clue we have a problem.

                                                                                                                            • solatic 5 hours ago

                                                                                                                              Not being willing to throw out bad/unused features is a different trap that organizations fall into. The amount of work that goes into, shall we say fortifying the foundations of a particular feature, ideally should be proportional to how much revenue that feature is responsible for. Test code also has to be maintained, and increasing the maintenance burden on something that has its own maintenance burden when customers don't even like it is shortsighted at the very least.

                                                                                                                              • KronisLV 11 hours ago

                                                                                                                                > I wouldn't reject it solely because chatgpt made it, but if the checkin causes breakage it normally gets assigned back to the person who checked it in, and if that person has no clue we have a problem.

                                                                                                                                That's a fair point, but regardless of who wrote the code (or what tools were used) it should also probably be as clear as possible to everyone who reads it, because chances are that at some point that person will be elsewhere and some other person will have to take over.

                                                                                                                                • not2b 33 minutes ago

                                                                                                                                  True, but you're talking about the difference between "only one person understands this, that's a risk!" and "zero people understand this".

                                                                                                                              • bradly 17 hours ago

                                                                                                                                Yes, and the team could be missing structures to support junior engineers. Why they didn't ask for help or pairing is really important to dig into, and I would expect a senior manager to understand this and be introspective about the environment they have created where this human made this choice.

                                                                                                                                • undefined 16 hours ago
                                                                                                                                  [deleted]
                                                                                                                                • GeoAtreides 8 hours ago

                                                                                                                                  > Write some decent tests, commit, move on.

                                                                                                                                  Move on to what?! Where does a junior programmer who doesn't understand what the code does move on to?

                                                                                                                                • XorNot 17 hours ago

                                                                                                                                  I mean if that was an answer I got given by a junior during a code review the next email I'd be sending would be to my team lead about it.

                                                                                                                                • sofixa 10 hours ago

                                                                                                                                  I have a better one: a senior architect who wrote a proposal for a new piece of documentation, and when asked about the 3 main topics in the doc and why he chose them, said "LLM said those are the main ones". The rest of the doc was obviously incoherent LLM soup as well.

                                                                                                                                  • ginko 7 hours ago

                                                                                                                                    >When asked what a chunk of code did, the team member matter-of-factly replied "I don't know, chatgpt wrote that"

                                                                                                                                    That'd be an immediate -2 from me.

                                                                                                                                    • hardbants 18 hours ago

                                                                                                                                      [dead]

                                                                                                                                    • 1vuio0pswjnm7 an hour ago

                                                                                                                                      "The OpenWrt One, which hit the market in 2024, quickly sold out its initial production run."

                                                                                                                                      But have its distributors sold out their inventory from this initial production run?

                                                                                                                                      https://www.aliexpress.us/item/3256807609464530.html?spm=526...

                                                                                                                                      • christina97 20 hours ago

                                                                                                                                        > A major project will discover that it has merged a lot of AI-generated code, a fact that may become evident when it becomes clear that the alleged author does not actually understand what the code does.

                                                                                                                                        Not to detract from this point, but I don’t think I understand what half the code I have written does if it’s been more than a month since I wrote it…

                                                                                                                                        • WaitWaitWha 19 hours ago

                                                                                                                                          I am confident that you do understand it at time of writing.

                                                                                                                                          > We depend on our developers to contribute their own work and to stand behind it; large language models cannot do that. A project that discovers such code in its repository may face the unpleasant prospect of reverting significant changes.

                                                                                                                                          At time of writing and commit, I am certain you "stand behind" your code. I think the author refers to the new script kiddies of the AI era. Many do not understand what the AI spits out at the time of copy/paste.

                                                                                                                                          • ozim 19 hours ago

                                                                                                                                            Sounds a lot like bashing copy-pasting from StackOverflow. So it's also like the old "kids these days" argument.

                                                                                                                                            No reasonable company pipes stuff directly to prod; you still have some code review and QA. So it doesn't matter whether you copy from SO without understanding or an LLM generates code that you don't understand.

                                                                                                                                            Both are bad, but both still happen, and the world didn't crash.

                                                                                                                                            • bigstrat2003 15 hours ago

                                                                                                                                              > Sounds a lot like bashing copy pasting from StackOverflow.

                                                                                                                                              Which is also very clearly unacceptable. If you just paste code from SO without even understanding what it does, you have fucked up just as hard as if you paste code from an LLM without understanding it.

                                                                                                                                              • BenjiWiebe 18 hours ago

                                                                                                                                                An LLM can generate a larger chunk of code than you'll find on SO, so I think it's a larger issue to have LLM code than copy-pasted SO code.

                                                                                                                                                • seanw444 16 hours ago

                                                                                                                                                  I also think that it would be a nightmare to properly review a large PR of exclusively AI code. If you take the time to understand what it's doing, and find as many little bugs and edge cases as possible, you may as well have just written it yourself.

                                                                                                                                                  • JadeNB 16 hours ago

                                                                                                                                                    > LLM can generate a larger chunk of code than you'll find on SO, so I think it's a larger issue to have LLM code than copy-pasted SO code.

                                                                                                                                                    It also generates code customized to your request, so there is temptation to avoid doing even the minimal work of "how do I turn this SO snippet into something that works with my program?"

                                                                                                                                                    • bryanrasmussen 13 hours ago

                                                                                                                                                      Agreed.

                                                                                                                                                      As a normal rule, somebody copied code from SO after searching for "unique identifier generator in JavaScript", and the code in the top answer might not be 100% understandable to them, but most of it is, and it doesn't do anything extremely weird. When asked what that bit of code does, they can probably say it's the unique id generator.

                                                                                                                                                      Somebody might instead ask an AI to write a login module in JavaScript; inside of that will be a unique identifier generator. When asked what that bit of code does, they reply: hmm, not sure, it's from ChatGPT.
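
                                                                                                                                                      The kind of snippet in question is usually tiny and self-describing. A hypothetical example of such a top-answer id generator (the name and approach are illustrative, not taken from any specific SO answer):

                                                                                                                                                      ```javascript
                                                                                                                                                      // Hypothetical StackOverflow-style unique id generator: the current
                                                                                                                                                      // timestamp in base 36 plus a random base-36 suffix. Small enough that
                                                                                                                                                      // even a copy-paster can say what it does: "it makes a unique id".
                                                                                                                                                      function uniqueId() {
                                                                                                                                                        return Date.now().toString(36) + Math.random().toString(36).slice(2, 10);
                                                                                                                                                      }

                                                                                                                                                      console.log(uniqueId()); // prints a short, effectively unique string
                                                                                                                                                      ```

                                                                                                                                                      The point upthread is about scale: a reviewer can absorb one function like this, but not an entire LLM-generated login module built out of dozens of them.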

                                                                                                                                                    • thayne 12 hours ago

                                                                                                                                                      It's not very common for people to make drive-by pull requests that just copy code from Stack Overflow on open source projects. I've already started seeing that with LLM-generated code. And yeah, hopefully the problems with it are caught, but it wastes the maintainer's time and drives maintainer burnout.

                                                                                                                                                      • bitmasher9 15 hours ago

                                                                                                                                                        > No reasonable company pipes stuff directly to prod

                                                                                                                                                        I’ve definitely worked at places where the time gap between code merge and prod deployment is less than an hour, and no human QA process occurs before code is servicing customers. This approach has risks and rewards, and is one of many reasonable approaches.

                                                                                                                                                        • stcroixx 5 hours ago

                                                                                                                                                          Yes, I've worked on small teams of highly experienced people where code reviews may only happen a couple times a year for the purpose of knowledge transfer. This is how I've seen it work on what I would consider the most critical and best performing code I've been exposed to. High volume, high stakes stuff in finance and health care.

                                                                                                                                                    • elcritch 18 hours ago

                                                                                                                                                      Well, LLM-generated code often doesn't work for non-trivial code, or for cases that aren't rehashed a million times like fizzbuzz.

                                                                                                                                                      So I find it almost always requires going through the code to understand it in order to find "oh the LLM's statistical pattern matching made up this bit here".

                                                                                                                                                      I've been using Claude lately and it's pretty great for translating code from other languages. But in a few bits it just randomly swapped two variables, or plain forgot to do something, etc.

                                                                                                                                                      • dehrmann 15 hours ago

                                                                                                                                                        Ah, yes. The good old "what idiot wrote this?" experience.

                                                                                                                                                        • Ntrails 5 hours ago

                                                                                                                                                          Don't forget the revelation 2 weeks later when you realise immediate past you should've trusted deep past you instead of assuming he'd somehow got wiser in the intervening months.

                                                                                                                                                          Instead, intermediate past you broke things properly, because they forgot about the edge case deep past you was cautiously avoiding.

                                                                                                                                                        • kstenerud 13 hours ago

                                                                                                                                                          I can always understand code I wrote even decades ago, but only because I use descriptive names, and strategic comments to describe why I'm using a particular approach, or to describe an API. If I fail to do that, it takes a lot of effort to remember what's going on.

                                                                                                                                                          • anonzzzies 13 hours ago

                                                                                                                                                            I have heard that before and never understood that; I understand code I wrote 40 years ago fine. I have issues understanding code by others, but my own I understand no matter when it was written. Of course others don't understand my code until they dive in and, like me theirs, forget how it works weeks after.

                                                                                                                                                            I do find all my old code, even from yesterday, total shite and it should be rewritten, but probably never will be.

                                                                                                                                                            • undefined 18 hours ago
                                                                                                                                                              [deleted]
                                                                                                                                                            • isaiahwp 19 hours ago

                                                                                                                                                              > A major project will discover that it has merged a lot of AI-generated code, a fact that may become evident when it becomes clear that the alleged author does not actually understand what the code does.

                                                                                                                                                              "Oh Machine Spirit, I call to thee, let the God-Machine breathe half-life unto thy data flow and help me comprehend thy secrets."

                                                                                                                                                              • bodge5000 14 hours ago

                                                                                                                                                                And they told me laptop-safe sacred oils and a massive surplus of red robes were a "bad investment", look who's laughing now

                                                                                                                                                                • merksoftworks 19 hours ago

                                                                                                                                                                  That's how ye' get yerself Tzeench'd

                                                                                                                                                                • aithrowawaycomm 17 hours ago

                                                                                                                                                                  > Meanwhile, we will see more focused efforts to create truly free generative AI systems, perhaps including the creation of one or more foundations to support the creation of the models

                                                                                                                                                                  I understand this will be free-as-in-beer and free-as-in-freedom... but if it's also free-as-in-"we downloaded a bunch of copyrighted material without paying for it" then I have no interest in using it myself. I am not sure there even is enough free-as-in-ethical stuff to build a useful LLM. (I am aware people are trying, maybe they've had success and I missed it.)

                                                                                                                                                                  • ASalazarMX 4 hours ago

                                                                                                                                                                    I don't think blindly abiding by copyright is the moral high ground here, even if it's the law. Knowledge wants to be free, and the way AIs need to be trained now is a sign that copyright laws have become unreasonably restrictive and commercialized.

                                                                                                                                                                    Not only should AIs be allowed to train on pirated content; humans should be too. Copyright laws need to be scaled back so that creators are protected for a reasonable period, but humanity is not gated out of its own culture for decades. The cheaper culture distribution has become, the harsher copyright laws have grown.

                                                                                                                                                                    • reaperducer 14 hours ago

                                                                                                                                                                      free-as-in-"we downloaded a bunch of copyrighted material without paying for it"

                                                                                                                                                                      That's "free-as-in-load."

                                                                                                                                                                    • openrisk 9 hours ago

                                                                                                                                                                      An overwhelming fraction of the comments focus on the "AI-contributed code" angle, while back in reality:

                                                                                                                                                                      > Global belligerence will make itself felt in our community. The world as a whole does not appear to be headed in a peaceful direction

                                                                                                                                                                      If the geopolitical landscape continues deteriorating, the tech universe as we knew it will cease to exist. Fragmentation is already a reality in egregious cases, but the dynamic could become much more prevalent.

                                                                                                                                                                      • The_Colonel 8 hours ago

                                                                                                                                                                        Kinda depends on what you mean exactly. For example, the open source world will likely not be affected aside from a few cases like the Russian Linux developers. Neither China nor Russia is likely to completely block access to the internet, and developers won't have any incentive to isolate themselves.

                                                                                                                                                                        • openrisk 7 hours ago

                                                                                                                                                                          That sounds quite optimistic. It doesn't take complete blocking before there are significant implications. There are many aspects to consider, from more friction in getting access to distribution channels to the more fundamental "forking" of initiatives and visions. This might be already happening to some degree but is hard to quantify.

                                                                                                                                                                          • The_Colonel 6 hours ago

                                                                                                                                                                            > It doesn't take complete blocking before there are significant implications.

                                                                                                                                                                            Mostly for consumers. Advanced users in e.g. China (likely in Russia as well) use VPNs routinely already.

                                                                                                                                                                            > from more friction in getting access to distribution channels to the more fundamental "forking" of initiatives and visions

                                                                                                                                                                            What's in it for the devs/companies to fork just because of the geopolitical situation? A fork means more work, more costs. In some cases, like the Linux kernel, Russian companies (Baikal) are forced to fork, but I don't see them doing this on a massive scale for projects where they don't have to.

                                                                                                                                                                            I think there is some parallel development going on in China, but that's more because of the language/cultural barrier and has always been so, so I don't expect a major change.

                                                                                                                                                                      • dgfitz 20 hours ago

                                                                                                                                                                        Ignoring all the points made, this was a very pleasant reading experience.

                                                                                                                                                                        Not ignoring the points made, I cannot put my finger on where LLMs land in 2025. I do not think any sort of AGI type of phenomenon will happen.

                                                                                                                                                                        • tkgally 19 hours ago

                                                                                                                                                                          Yes, it was a good read. As someone with no direct connection to Linux or open-source development, I was surprised to find myself reading to the end. And near the end I found this comment particularly wise:

                                                                                                                                                                          > The world as a whole does not appear to be headed in a peaceful direction; even if new conflicts do not spring up, the existing ones will be enough to affect the development community. Developers from out-of-favor parts of the world may, again, find themselves excluded, regardless of any personal culpability they may have for the evil actions of their governments or employers.

                                                                                                                                                                        • anshulbhide 13 hours ago

                                                                                                                                                                          > A major project will discover that it has merged a lot of AI-generated code, a fact that may become evident when it becomes clear that the alleged author does not actually understand what the code does. We depend on our developers to contribute their own work and to stand behind it; large language models cannot do that. A project that discovers such code in its repository may face the unpleasant prospect of reverting significant changes.

                                                                                                                                                                          A lot of companies are going to discover this in 2025. Also, a major product company is going to find LLM-generated code that might have been trained on OSS code, and their compliance team is going to throw a fit.

                                                                                                                                                                          • throwaway2037 16 hours ago

                                                                                                                                                                                > the launch of one or more foundations aimed specifically at providing support for maintainers
                                                                                                                                                                            
                                                                                                                                                                            Doesn't Red Hat (and other similar companies) already fulfill this role?
                                                                                                                                                                            • usr1106 11 hours ago

                                                                                                                                                                              There are many widely used open source components without a maintainer who is allowed to work on them (enough) during paid working time.

                                                                                                                                                                            • lionkor 6 hours ago

                                                                                                                                                                              > A major project will discover that it has merged a lot of AI-generated code

                                                                                                                                                                              After a code review, at least the reviewer should know the feature well enough to maintain it. This is, at least in my experience, the main part of the job of the reviewer at the time of review: Understand what the code does, why it does it, how it does it, such that you agree with it as if it's code you've written.

                                                                                                                                                                              If major projects merge code because "lgtm" is taken literally, then they have been merging bogus code before LLMs.

                                                                                                                                                                              • sebazzz 3 hours ago

On the single-maintainer subject: I wonder if there is precedent for a single maintainer of a library being threatened or corrupted by a state-level actor into incorporating certain code?

                                                                                                                                                                                • spjt 6 hours ago

                                                                                                                                                                                  > single-maintainer projects (or subsystems, or packages) will be seen as risky

                                                                                                                                                                                  I would actually see a single-maintainer project as less risky. Looking at the XZ backdoor issue in particular, nobody even knows who the person is that introduced it. With a single-maintainer project, you only have to trust one person, who is often a known quantity.

                                                                                                                                                                                  • steeleduncan 5 hours ago

                                                                                                                                                                                    > global belligerence will make itself felt in our community

                                                                                                                                                                                    Sadly this has already happened. The Israel/Palestine situation was frequently referenced during the bitterest arguments in the NixOS community governance issues last year

                                                                                                                                                                                    • SoftTalker 15 hours ago

                                                                                                                                                                                      sched-ext sounds interesting. Anyone doing any work with it? Wondering if it's one of those things that sounds cool but probably is only suitable in some very specific use cases.

                                                                                                                                                                                    • divbzero 17 hours ago

                                                                                                                                                                                      > we will see more focused efforts to create truly free generative AI systems, perhaps including the creation of one or more foundations to support the creation of the models

                                                                                                                                                                                      What are the biggest barriers to making this a reality? The training data or the processing power?

                                                                                                                                                                                      Which open-source projects, if any, are the farthest along in this effort?

                                                                                                                                                                                      • guappa 11 hours ago

The costs in hardware and electricity are incredible. Doing the same as the big companies is impossible; there is no funding to achieve it.

The question is whether it's needed at all to get good results.

                                                                                                                                                                                        Also the big companies have many lawyers so they feel confident to systematically violate copyright, but a smaller entity could probably not afford the same risk.

                                                                                                                                                                                        • brianbest101 7 hours ago

                                                                                                                                                                                          [dead]

                                                                                                                                                                                        • kunley 10 hours ago

Love the note on rejecting AI-generated code, and about the alleged authors who don't understand what the code does.

                                                                                                                                                                                          • undefined 7 hours ago
                                                                                                                                                                                            [deleted]
                                                                                                                                                                                            • jbarrow 6 hours ago

                                                                                                                                                                                              > There will be more cloud-based products turned to bricks by manufacturers that go bankrupt or simply stop caring.

                                                                                                                                                                                              This one feels like a gimme. The recent Garmin outage that partially bricked the Connect app was a bit of a surprise; so much of what Garmin Connect does _should be_ local to the phone. Plus it's a free service (after you've paid for the device).

                                                                                                                                                                                              "You'll own nothing and you'll be happy" doesn't only apply to media/digital goods, but a lot of hardware at this point. :/

                                                                                                                                                                                              • vivzkestrel 16 hours ago

                                                                                                                                                                                                hey OP what were your predictions for 2024, mind sharing here?

                                                                                                                                                                                              • motohagiography 4 hours ago

                                                                                                                                                                                                > Another XZ-like backdoor attempt will come to light.

It may not. If I discovered an operation like this, I'd probably find a way to prove it and then set up a Monero wallet and say it's going to cost the creepy agency, whoever they are, $100k USD a month to not publish. There are others who say this has already happened.

                                                                                                                                                                                                • BirAdam 17 hours ago

Didn’t the leader of the kernel Rust team resign in September?

                                                                                                                                                                                                • ekianjo 11 hours ago

                                                                                                                                                                                                  > Distributions for mobile devices will see a resurgence in interest in the coming year.

They must have missed the news about DivestOS closing shop.

                                                                                                                                                                                                  • AtlasBarfed 18 hours ago

Linux will, politically, continue to fail to extract needed monetary support from first-world countries and the mega-corps principally dependent on it.

In particular, militaries and national-security concerns.

The US government has its underwear in a bunch over various Chinese-sourced hardware, but continues to let a bunch of hobbyists maintain the software.

I almost think it is time to hold these massive orgs accountable by merging targeted vulnerabilities and performance bombs unless they start paying up. Microsoft and other monopolistic software companies have no issue using whatever tactics are necessary to shake revenue out of software-dependent/addicted orgs.

                                                                                                                                                                                                    • not2b 17 hours ago

                                                                                                                                                                                                      Most Linux kernel contributors are professionals who are paid for their work. They aren't hobbyists.

                                                                                                                                                                                                      However, there are quite a few critically important tools and libraries that are essentially maintained by a volunteer as a hobby, and yes, that's a risk.

                                                                                                                                                                                                      • SoftTalker 16 hours ago

                                                                                                                                                                                                        Hence the observation that "single-maintainer projects (or subsystems, or packages) will be seen as risky".

                                                                                                                                                                                                        • AtlasBarfed 5 hours ago

There are organizations with trillions of dollars in budgets dependent on Linux.

I'm talking about seriously funded foundations, with hundreds of millions of dollars, on par with Windows at least at some scale.

The US government should be forking over tens of millions. Hell, it should be part of the AWS contract with the government that they fund the Linux Foundation to that tune.

Everyone crying over some reputation smear on Linux programmers is missing the goddamn point, especially on the desktop front.

If the US wants to continue to have wide-open, vulnerable consumer networks, then I guess Windows will keep making us fundamentally vulnerable. The US military needs a consumer-tier secure Linux desktop. And I'd rather it wasn't Android corporate spyware, because otherwise that is what we are getting.

                                                                                                                                                                                                          I guess I just answered my question. Android for everyone.

                                                                                                                                                                                                        • jahewson 15 hours ago

                                                                                                                                                                                                          Per Wikipedia:

                                                                                                                                                                                                          “An analysis of the Linux kernel in 2017 showed that well over 85% of the code was developed by programmers who are being paid for their work”

                                                                                                                                                                                                          https://en.m.wikipedia.org/wiki/Linux

                                                                                                                                                                                                          • The_Colonel 13 hours ago

                                                                                                                                                                                                            I would bet the percentage increased since then.

                                                                                                                                                                                                          • spencerflem 14 hours ago

                                                                                                                                                                                                            If you don't want corporations using your software, don't put it out in a license that invites them to do so. (illegal scraping by ai notwithstanding)

                                                                                                                                                                                                            • guappa 11 hours ago

                                                                                                                                                                                                              I want them to use it, I don't want them opening issues to request new features.

                                                                                                                                                                                                            • nindalf 16 hours ago

                                                                                                                                                                                                              Yeah bashing big tech is an evergreen source of upvotes. Especially since it’s not always clear how something was funded. Take io_uring for example, an async I/O subsystem for Linux. Could you say offhand if this was funded by some big tech company or not? I’ll bet most people couldn’t.

                                                                                                                                                                                                              Another example - everyone knows the xz attack. How many people can name offhand the company where Andres Freund worked? He was a full time employee of a tech company working on Postgres when he found this attack.

                                                                                                                                                                                                              It’s always worth discussing how we can improve financial situation for maintainers in important open source projects. Hyperbole like your comment is useless at best and counterproductive at worst.

                                                                                                                                                                                                            • undefined 4 hours ago
                                                                                                                                                                                                              [deleted]