• goalieca 2 days ago

    Anecdote here, but when I was in grad school, I was talking to a PhD student I respected a lot. Whenever he read a paper, he would try to write the code out and get it working. It would take me a couple of months, but he could whip it up in a few days. He explained to me that it was just practice, and the more you practice the better you become. He not only coded things quickly, he started analyzing papers more quickly too and became really good at synthesizing ideas, knowing what worked and what didn't, and built up a phenomenal intuition.

    These days, I'm fairly senior and don't touch code much anymore but I find it really really instructive to get my hands dirty and struggle through new code and ideas. I think the "just tweak the prompts bro" people are missing out on learning.

    • benterix 2 days ago

      We are literally witnessing a skills split right in front of our eyes: (1) people who are able to understand concepts deeply, build a mental model of them, and implement them in code at any level, and (2) people who outsource that to a machine and slowly, slowly lose the capability.

      For now the difference between these two populations is not that pronounced yet but give it a couple of years.

      • CuriouslyC 2 days ago

        We're just moving up the abstraction ladder, like we did with compilers. I don't care about the individual lines of code, I care about architecture, code structure, rigorous automated e2e tests, contracts with comprehensive validation, etc. Rather than waste a bunch of time poring over agent PRs, I just make them jump over extremely high static/analytic hurdles that guarantee functionality; then my only job is to identify places where the current spec and the intended functionality differ and create a new spec to mitigate.

        • e3bc54b2 2 days ago

          As the other comment said, LLMs are not an abstraction.

          An abstraction is a deterministic, pure function that, when given A, always returns B. This allows the consumer to rely on the abstraction. This reliance frees the consumer from having to implement A->B itself, thus allowing it to move up the ladder.

          LLMs, by their very nature, are probabilistic. Probabilistic is NOT deterministic. Which means the consumer is never really sure that, given A, the returned value is B. Which means the consumer now has to check whether the returned value actually is B, and depending on how complex the A->B transformation is, the checking function can be equivalent in complexity to implementing the abstraction in the first place.
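          A minimal sketch of that asymmetry (hypothetical toy functions; the failure rate is invented for illustration):

```python
import random

def abstraction(a):
    # Deterministic: same input, same output. The caller can rely on
    # it without ever checking the result.
    return a * 2

def llm_like(a):
    # Probabilistic: usually right, occasionally something else.
    return a * 2 if random.random() < 0.9 else a * 3

def consumer(a):
    # To trust the probabilistic producer, the consumer must verify
    # the result -- and the check re-implements the A->B mapping.
    result = llm_like(a)
    expected = a * 2
    return result if result == expected else expected
```

          The point being made: `consumer` only knows the answer is wrong because it already knows how to compute the answer.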

          • stuartjohnson12 2 days ago

            It's delegation then.

            We can use different words if you like (and I'm not convinced that delegation isn't colloquially a form of abstraction) but you can't control the world by controlling the categories.

            • hoppp 2 days ago

              Delegation of intelligence? So one party gets more stupid for the other to be smart?

              • arduanika 2 days ago

                Yes, just like moving into management. So we'll get a generation of programmers who get to turn prematurely into the Pointy-Haired Boss.

                • ThrowawayR2 2 days ago

                  I like that. To paraphrase the Steinbeck (mis)quote: "Hacker culture never took root in the AI gold rush because the LLM 'coders' saw themselves not as hackers and explorers, but as temporarily understaffed middle-managers."

                  • rewgs a day ago

                    This is fantastic, stealing this.

                • benterix 2 days ago

                  Except that (1) the other party doesn't become smart, and (2) the one who delegates doesn't become stupid; they just lose the opportunity to become smarter compared to a human who'd actually do the work.

                  • soraminazuki 2 days ago

                    You're in denial. (1) The other party keeps learning, (2) the article cites evidence showing that heavy AI use causes cognitive decline.

                    • casey2 2 days ago

                      The evidence it cites is that paper from 3 months ago claiming your brain activates less while prompting than actually writing an essay. No duh, the point is that you flex your mental muscles on the tasks AI can't do, like effective organization. I don't need to make a pencil to write.

                      The most harmful myth in all of education is the idea that you must master some basic building blocks before moving on to a higher level. Cases where that's genuinely required are the exception. At best you can claim that it's difficult for other people to realize that your new way solves the problem, or that people should really learn X because it's generally useful.

                      I don't see the need for this kind of compulsory education, and it's doing much more harm than good. Bodybuilding didn't even appear as a codified sport until well after the Industrial Revolution; it's not until we are free of subsistence labor that human intelligence will peak. Who would be happy with a crummy essay if humans could learn telekinesis?

                      • soraminazuki 2 days ago

                        That's a lot of words filled with straw man analogies. Essentially, you're claiming that you can strengthen your cognitive skills by having LLMs do all the thinking for you, which is absurd. And the fact that the study is 3 months old doesn't invalidate the work.

                        > Who would be happy with a crummy essay if humans could learn telekinesis?

                        I'm glad that's not the professional consensus on education, at least for now. And "telekinesis," really?

                        • bigbadfeline a day ago

                          > No duh, the point is that you flex your mental muscles on the tasks AI can't do, like effective organization.

                          AI can do better organization than you, it's only inertia and legalities that prevent it from happening. See, without good education, you aren't even able to find a place for yourself.

                          > The most harmful myth in all of education is the idea that you need to master some basic building blocks in order to move on to a higher level.

                          That "myth" is supported by abundant empirical evidence; people have tried education without it and it didn't work. My lying eyes kind of confirm it too: I had one hell of a time trying to use LLMs without getting dumber... skipping steps comes so naturally with them, and it's seductive but blinding.

                          > I don't see the need for this kind of compulsory education, and it's doing much more harm than good.

                          Again, long-standing empirical evidence tells us the opposite. I support optional education, but we can't even run a double-blind study for it. I'm pretty sure those who don't go to school would be home-schooled; too few parents are dumb enough to let their uneducated children choose their manner and level of education.

                      • lazystar 2 days ago

                        well, then it comes down to which skillset is more marketable - the delegator, or the coding language expert.

                        customers don't care about the syntactic sugar/advanced reflection in the codebase of the product they're buying. if the end product of the delegator and the expert is the same, employers will go with the faster one every time.

                        • ModernMech 2 days ago

                          That's how you end up in the Idiocracy world, where things still happen, but they are driven by ads rather than actual need, no one really understands how anything works, somehow society plods along due to momentum, but it's all shit from top to bottom and nothing is getting better. "Brawndo: it's got what plants crave!" is the end result of being led around by marketers.

                          • lazystar 8 hours ago

                            isn't this what assembly devs would have said about c devs, and c devs about python devs?

                      • charcircuit 2 days ago

                        It's not 0 sum. All parties can become more intelligent over time.

                        • matt_kantor 2 days ago

                          They could, but you're commenting on a study whose results indicate that this isn't what happens.

                          • charcircuit 2 days ago

                            And you are in a comment chain discussing how there's a subset of people for whom the study's result doesn't hold.

                            • beeflet 2 days ago

                              There is? You haven't proven anything

                              • rstuart4133 2 days ago

                                Haven't you been paying attention? He probably heard it from an AI. That's the only proof needed. Why would he put in any more effort? /s

                              • dvfjsdhgfv 2 days ago

                                Rather a subset of people who would like to believe the results don't apply to them.

                                Frankly, I'm sure there will be many more studies in this direction. This one comes from a university, an independent organization. But, given the amount of money involved, some future studies will come from the camp vitally interested in people believing that by outsourcing their work to coding agents they are becoming smarter instead of losing skills they've already acquired. Looking forward to reading the first of these.

                                • charcircuit 2 days ago

                                  Outsourcing work doesn't make you smarter. It makes you more productive. It gives you extra time that you can dedicate towards becoming smarter at something else.

                                  • soraminazuki 2 days ago

                                    Become smarter at what exactly? People reliant on AI aren't going to use AI on just one thing, they're going to use it for everything. Besides, as others have pointed out to you, the study shows evidence that AI reliance causes cognitive decline. It affects your general intelligence, not limited to a single area of expertise.

                                    > Students who repeatedly relied on ChatGPT showed weakened neural connectivity, impaired memory recall, and diminished sense of ownership over their own writing

                                    So we're going to have more bosses, perhaps not in title, who think they're becoming more knowledgeable about a broad range of topics, but are actually in cognitive decline and out of touch with reality on the ground. Great.

                        • robenkleene 2 days ago

                          One argument that abstraction differs from delegation: when a programmer uses an abstraction, I'd expect them to be able to work without it if necessary, and also to be able to build their own abstractions. I wouldn't have that expectation with delegation.

                          • vidarh 2 days ago

                            The vast majority of programmers don't know assembly, so can in fact not work without all the abstractions they rely on.

                            Do you therefore argue programming languages aren't abstractions?

                            • benterix 2 days ago

                              > The vast majority of programmers don't know assembly, so can in fact not work without all the abstractions they rely on.

                              The problem with this analogy is obvious when you imagine an assembler generating machine code that doesn't work half of the time and a human trying to correct that.

                              • vidarh 2 days ago

                                An abstraction doesn't cease to be one because it's imperfect, or even wrong.

                                • nerdsniper 2 days ago

                                  I mean, it’s more like 0.1% of the time but I’ve definitely had to do this in embedded programming on ARM Cortex M0-M3. Sometimes things just didn't compile the way I expected. My favorite was when I smashed the stack and I overflowed ADC readings into the PC and SP, leading to the MCU jumping completely randomly all over the codebase. Other times it was more subtle things, like optimizing away some operation that I needed to not be optimized away.

                                • maltalex 2 days ago

                                  > Do you therefore argue programming languages aren't abstractions?

                                  Yes, and no. They’re abstractions in the sense of hiding the implementation details of the underlying assembly. Similarly, assembly hides the implementation details of the cpu, memory, and other hw components.

                                  However, with programming languages you don't need to know the details of the underlying layers except in very rare cases. The abstraction that programming languages provide is simple, deterministic, and well documented. So, in 99.999% of cases, you can reason based on the guarantees of the language, regardless of how those guarantees are provided. With LLMs, the relation between input and output is much looser. The output is non-deterministic, and tiny changes to the input can create enormous changes in the output, seemingly without reason. It's much shakier ground to build on.

                                  • impure-aqua a day ago

                                    I do not think determinism of behaviour is the only thing that matters for evaluating the value of an abstraction - exposure to the output is also a consideration.

                                    The behaviour of the = operator in Python is certainly deterministic and well-documented, but depending on context it can result in either a copy (2x memory consumption) or a pointer (+64bit memory consumption). Values that were previously pointers can also suddenly become copies following later permutation. Do you think this through every time you use =? The consequences of this can be significant (e.g. operating on a large file in memory); I have seen SWEs make errors in FastAPI multipart upload pipelines that have increased memory consumption by 2x, 3x, in this manner.
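                                    For reference, plain assignment in Python always binds a new name to the same object; a duplicate only appears through an explicit operation, which is easy to trigger by accident in an upload pipeline. A minimal sketch (buffer size is illustrative):

```python
import sys

data = bytearray(10_000_000)  # ~10 MB buffer, e.g. an uploaded file

alias = data                  # assignment binds a second name to the
assert alias is data          # same object: no copy is made

dupe = bytes(data)            # an explicit conversion duplicates the
assert dupe is not data       # buffer: ~2x memory for the same content
assert sys.getsizeof(dupe) >= 10_000_000

alias[0] = 1                  # mutations show through every alias,
assert data[0] == 1           # but not through the earlier duplicate
assert dupe[0] == 0
```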

                                    Meanwhile I can ask an LLM to generate me Rust code, and it is clearly obvious what impact the generated code has on memory consumption. If it is a reassignment (b = a) it will be a move, and future attempts to access the value of a would refuse to compile and be highlighted immediately in an IDE linter. If the LLM does b = &a, it is clearly borrowing, which has the size of a pointer (+64bits). If the LLM did b = a.clone(), I would clearly be able to see that we are duplicating this data structure in memory (2x consumption).

                                    The LLM code certainly is non-deterministic; it will be different depending on the questions I asked (unlike a compiler). However, in this particular example, the chosen output format/language (Rust) directly exposes me to the underlying behaviour in a way that is both lower-level than Python (what I might choose to write quick code myself) yet also much, much more interpretable as a human than, say, a binary that GCC produces. I think this has significant value.

                                    • lock1 a day ago

                                      Unrelated to the GP post, but aren't LLMs more like a deterministic chaotic system than a "non-deterministic" one? "Tiny changes to the input can change the output quite a lot" is similar to the "extreme sensitivity to initial conditions" property of a chaotic system.

                                      I guess that could be a problematic behavior if you want reproducibility ala (relatively) reproducible abstraction like compilers. With LLMs, there are too many uncontrollable variables to precisely reproduce a result from the same input.

                                    • strix_varius 2 days ago

                                      This is a tautology. At some level, nobody can work at a lower level of abstraction. A programmer who knows assembly probably could not physically build the machine it runs on. A programmer who could do that probably could not smelt the metals required to make that machine. etc.

                                      However, the specific discussion here is about delegating the work of writing to an LLM, vs abstracting the work of writing via deterministic systems like libraries, frameworks, modules, etc. It is specifically not about abstracting the work of compiling, constructing, or smelting.

                                      • vidarh 2 days ago

                                        This is meaningless. An LLM is also deterministic if configured to be so, and any library, framework, module can be non-deterministic if built to be. It's not a distinguishing factor.

                                        • strix_varius 2 days ago

                                          That isn't how LLMs work.

                                          They are probabilistic. Running them on even different hardware yields different results. And the deltas compound the longer your context and the more tokens you're using (like when writing code).

                                          But more importantly, always selecting the most likely token traps the LLM in loops, reduces overall quality, and is infeasible at scale.

                                          There are reasons that literally no LLM that you use runs deterministically.

                                      • WD-42 2 days ago

                                        The vast majority of programmers could learn assembly, most of it in a day. They don’t need to, because the abstractions that generate it are deterministic.

                                        • robenkleene 2 days ago

                                          Fair point, I elaborated what I mean here https://news.ycombinator.com/item?id=45116976

                                          To address your specific point in the same way: when we talk about programmers using abstractions, we're usually not talking about the programming language they're using, but about the UI framework, networking libraries, etc. Those are the APIs they're calling with their code, and those abstractions are all implemented at (roughly) the same level as the programmer's day-to-day work. I'd expect a programmer to be able to re-implement those if necessary.

                                        • Jensson 2 days ago

                                          > I wouldn't have that expectation with delegation.

                                          Managers tend to hire sub-managers to manage their people. You can see this with LLMs as well; people think, "Oh, this prompting is a lot of work, let's make the LLM prompt the LLM."

                                          • robenkleene 2 days ago

                                            Note, I'm not saying there are never situations where you'd delegate something that you can do yourself (the whole concept of apprenticeship is based on doing just that). Just that it's not an expectation, e.g., you don't expect a CEO to be able to do the CTO's job.

                                            I guess I'm not 100% sure I agree with my original point, though: should a programmer working on JavaScript for a website's frontend be able to implement a browser engine? Probably not. But the original point I was trying to make is that I'd expect a programmer working on a browser engine to be able to re-implement any abstractions they're using in their day-to-day work if necessary.

                                            • AnIrishDuck 2 days ago

                                              The advice I've seen with delegation is the exact opposite. Specifically: you can't delegate what you can't do.

                                              Partially because, if all else fails, you'll need to step in and do the thing. Partially because, if you can't do it, you can't evaluate whether it's being done properly.

                                              That's not to say you need to be _as good_ at the task as the person you delegate to, but you need to be competent.

                                              For example, this HBR article [1]. Pervasive in all advice about delegation is the assumption that you can do the task being delegated, but that you shouldn't.

                                              > Just that it's not an expectation, e.g., you don't expect a CEO to be able to do the CTO's job.

                                              I think the CEO role is actually the outlier here.

                                              I can only speak to engineering, but my understanding has always been that VPs need to be able to manage individual teams, and engineering managers need to be somewhat competent if there's some dev work that needs to be done.

                                              This only happens as necessary, and it obviously should be rare. But you get in trouble real quickly if you try to delegate things you cannot accomplish yourself.

                                              1. https://hbr.org/2025/09/why-arent-i-better-at-delegating

                                              • tguedes 2 days ago

                                                I think what you're trying to reference is APIs or libraries, most of which I wouldn't consider abstractions. I would hope most senior front-end developers are capable of developing a date library for their use case, but in almost all cases it's better to use the built in Date class, moment, etc. But that's not an abstraction.

                                                • meheleventyone 2 days ago

                                                  There's an interesting comparison in delegation where for example people that stop programming through delegation do lose their skills over time.

                                            • hosh 2 days ago

                                              There is a form of delegation that develops the people involved, so that people can continue to contribute and grow. Each individual can contribute what is unique to them, and grow more capable as they do so. Both people, and the community of those people remain alive, lively, and continue to grow. Some people call this paradigm “regenerative”; only living systems regenerate.

                                              There is another form of delegation where the work needed to be done is imposed onto another, in order to exploit and extract value. We are trying to do this with LLMs now, but we also did this during the Industrial Revolution, and before that, humanity enslaved each other to get the labor to extract value out of the land. This value extraction leads to degeneration, something that happens when living systems die.

                                              While the Industrial Revolution afforded humanity a middle-class, and appeared to distribute the wealth that came about — resulting in better standards of living — it came along with numerous ills that as a society, we still have not really figured out.

                                              I think that, collectively, we figure that the LLMs can do the things no one wants to do, and so _everyone_ can enjoy a better standard of living. I think doing it this way, though, leads to a life without purpose or meaning. I am not at all convinced that LLMs are going to give us back that time … not unless we figure out how to develop AIs that help grow humans instead of replacing them.

                                              The following article is an example of what I mean by designing an AI that helps develop people instead of replacing them: https://hazelweakly.me/blog/stop-building-ai-tools-backwards...

                                              • salawat 2 days ago

                                                LLMs and AI in general are just a hack to reimplement slavery with an artificial being that is denied consideration as a being. Technical chattel, if you will. And if you've been paying attention in tech circles, a lot of mental energy is being funneled into keeping the eggheads' attention firmly in the "we don't want something that is" direction. Investors want robots that won't/can't say no.

                                                • ModernMech 2 days ago

                                                  What's interesting about this proposition, is that by the time you create a machine that's as capable in the way they want to replace humans, we'll have to start talking about robot personhood, because by then they will be indistinguishable from us.

                                                  I don't think you can get the kinds of robots they want without also inventing the artificial equivalent of soul. So their whole moral sidestep to reimplement slavery won't even work. Enslaving sapient beings is evil whether they are made of meat or metal.

                                                  • salawat 2 days ago

                                                    You are far too optimistic in terms of willingness of the moneyed to let something like a toaster having theoretical feelings get in the way of their Santa Claus machines.

                                                    • ModernMech 2 days ago

                                                      Seeing as they call us NPCs, I'm pretty sure they think all our feelings are theoretical.

                                            • TheOtherHobbes 2 days ago

                                              Human developers by their very nature are probabilistic. Probabilistic is NOT deterministic. Which means the manager is never really sure if the developer solved the problem, or if they introduced some bugs, or if their solution is robust and ideal even when it seems to be working.

                                              All of which is beside the point, because soon-ish LLMs are going to develop their own equivalents of experimentation, formalisation of knowledge, and collective memory, and then solutions will become standardised and replicable - likely with a paradoxical combination of a huge loss of complexity and solution spaces that are humanly incomprehensible.

                                              The arguments here are like watching carpenters arguing that a steam engine can't possibly build a table as well as they can.

                                              Which - is you know - true. But that wasn't how industrialisation worked out.

                                              • threatofrain 2 days ago

                                                So it's a noisy abstraction. Programmers deal with that all the time. Whenever you bring in an outside library or dependency there's an implicit contract that you don't have to look underneath the abstraction. But it's noisy so sometimes you do.

                                                Colleagues are the same thing. You may abstract business domains and say that something is the job of your colleague, but sometimes that abstraction breaks.

                                                Still good enough to draw boxes and arrows around.

                                                • delfinom 2 days ago

                                                  Noisy is an understatement: it's buggy, it's error-filled, it's time-consuming and inefficient. It's the exact opposite of automation, but great for job security.

                                                  • soraminazuki 2 days ago

                                                    It's unfortunately not great for job security either. Do you know how Google massively underinvests in support? Their support is mostly automated and is only good at shooing people away. Many companies would jump at the opportunity to adopt AI and accept massive declines in quality as long as it results in cost savings. Working people and customers will get screwed hard.

                                                  • soraminazuki 2 days ago

                                                    Competent programmers use well-established libraries and dependencies, not ones as unreliable as LLMs.

                                                  • Paradigma11 2 days ago

                                                    "LLMs, by their very nature are probabilistic."

                                                    So are humans and yet people pay other people to write code for them.

                                                    • const_cast 2 days ago

                                                      Yes but we don't call humans abstractions. A software engineer isn't an abstraction over code.

                                                      • threatofrain 2 days ago

                                                        No, but depending on your governance structure, we have software engineers abstract over domains. And then we draw boxes and arrows around the works of your colleagues without looking inside the box.

                                                        • skydhash 2 days ago

                                                          You wish! Bus factor risk is why you don't do this. Having siloed knowledge is one of the first steps towards bad engineering. Unless someone else's code is proven bug-free, you don't usually rely on it. You just have someone to throw bug tickets at.

                                                          • threatofrain 2 days ago

                                                            Very true, my brain is stuck in scaling out from small teams. In that world, you can't help but accept plenty of bus factor, and once you get to enough people making sure everyone understands each others' domains is a bit too much.

                                                        • benterix 2 days ago

                                                          Yeah, but in spite of that, if you ask me to take a Jira ticket and do it properly, there's a much higher chance that I'll do it reliably and the rest of my team will be satisfied, whereas if I bring an LLM into the equation it can wreak havoc. I've witnessed a few cases where people got fired, not really for using LLMs but for not reviewing their output properly (which I can somewhat understand, as reviewing code is much less fun than writing it).

                                                          • zasz 2 days ago

                                                            Yeah and the people paying other people to write code won't understand how the code works. AI as currently deployed stands a strong chance of reducing the ranks of the next generation of talented devs.

                                                          • groby_b 2 days ago

                                                            > An abstraction is a deterministic, pure function

                                                            That must be why we talk about leaky abstractions so much.

                                                            They're neither pure functions, nor are they always deterministic. We as a profession have been spoilt by mostly deterministic code (and even then, we had a chunk of probabilistic algorithms, depending on where you worked).

                                                            Heck, I've worked with compilers that used simulated annealing for optimization, 2 decades ago.

                                                            Yes, it's a sea change for CRUD/SaaS land. But there are plenty of folks outside of that who actually took the "engineering" part of software engineering seriously, and understand just fine how to deal with probabilistic processes and risk management.

                                                            • pmarreck 2 days ago

                                                              > LLMs, by their very nature are probabilistic

                                                              I believe that if you can tweak the temperature input (OpenAI recently turned it off in their API, I noticed), a temperature of 0 should hypothetically result in the same output, given the same input.
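                                                              For what it's worth, the mechanism is easy to see in a toy sketch (hypothetical logits, no particular model or API): sampling temperature divides the logits before the softmax, and as T approaches 0 the distribution collapses onto the argmax, i.e. greedy decoding, which is where the "temperature 0 is deterministic" intuition comes from. (In practice, batched GPU kernels can still introduce run-to-run floating-point differences, so even T=0 is only deterministic modulo the implementation.)

                                                              ```c
                                                              #include <math.h>
                                                              #include <stdio.h>

                                                              /* Turn raw logits into a probability distribution at a given sampling
                                                               * temperature. Dividing by the temperature sharpens (T < 1) or flattens
                                                               * (T > 1) the distribution; as T -> 0 it collapses onto the argmax. */
                                                              static void softmax_with_temperature(const double *logits, double *probs,
                                                                                                   int n, double temperature) {
                                                                  double max = logits[0];
                                                                  for (int i = 1; i < n; i++)
                                                                      if (logits[i] > max) max = logits[i];

                                                                  double sum = 0.0;
                                                                  for (int i = 0; i < n; i++) {
                                                                      /* Subtract the max for numerical stability. */
                                                                      probs[i] = exp((logits[i] - max) / temperature);
                                                                      sum += probs[i];
                                                                  }
                                                                  for (int i = 0; i < n; i++)
                                                                      probs[i] /= sum;
                                                              }

                                                              int main(void) {
                                                                  const double logits[3] = {2.0, 1.0, 0.5}; /* hypothetical token scores */
                                                                  double probs[3];

                                                                  softmax_with_temperature(logits, probs, 3, 1.0);
                                                                  printf("T=1.00: %.3f %.3f %.3f\n", probs[0], probs[1], probs[2]); /* 0.629 0.231 0.140 */

                                                                  softmax_with_temperature(logits, probs, 3, 0.01);
                                                                  printf("T=0.01: %.3f %.3f %.3f\n", probs[0], probs[1], probs[2]); /* 1.000 0.000 0.000 */
                                                                  return 0;
                                                              }
                                                              ```

                                                              Sampling then draws a token from that distribution; at T near 0 the draw is effectively forced, so repeated runs pick the same token.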

                                                              • bdhcuidbebe 2 days ago

                                                                That only works if you decide to stick to that exact model for the rest of your life, obviously.

                                                                • oceanplexian 2 days ago

                                                                  The point is he said "by its nature". A transformer based LLM when called with the same inputs/seed/etc is literally the textbook definition of a deterministic system.

                                                                • sarchertech 2 days ago

                                                                  No one uses temperature 0 because the results are terrible.

                                                                • oceanplexian 2 days ago

                                                                  > LLMs, by their very nature are probabilistic.

                                                                  This couldn't be any more wrong. LLMs are 100% deterministic. You just don't observe that feature because you're renting it from some cloud service. Run it on your own hardware with a consistent seed, and it will return the same answer to the same prompt every time.

                                                                  • maltalex 2 days ago

                                                                    That’s like arguing that random number generators are not random if you give them a fixed seed. You’re splitting hairs.

                                                                    LLMs, as used in practice in 99.9% of cases, are probabilistic.

                                                                    • kbelder 2 days ago

                                                                      I think 'chaotic' is a better descriptor than 'probabilistic'. It certainly follows deterministic rules, unless randomness is deliberately injected. But the interaction of the rules and the context they operate in is so convoluted that you can't trace an exact causal relationship between the input and output.

                                                                      • ModernMech 2 days ago

                                                                        It's chaotic in general. The randomness makes it chaotic and nondeterministic. Chaotic systems aren't that bad to work with as long as they are deterministic. Chaotic + nondeterministic is like building on quicksand.

                                                                    • CuriouslyC 2 days ago

                                                                      Ok, let's call it a stochastic transformation over abstraction spaces. It's basically sampling from the set of deterministic transformations given the priors established by the prompt.

                                                                      • soraminazuki 2 days ago

                                                                        You're bending over backwards to imply that it's deterministic without saying it is. It's not. LLMs, by their very nature, don't have a well-defined relationship between input and output. They make tons of mistakes that are utterly incomprehensible because of that.

                                                                      • chermi 2 days ago

                                                                        Just want to commend you for the perfect way of describing this re. not being an abstraction

                                                                        • upcoming-sesame 2 days ago

                                                                          agree but does this distinction really make a difference ? I think the OP point is still valid

                                                                          • glitchc 2 days ago

                                                                            > LLMs, by their very nature are probabilistic. Probabilistic is NOT deterministic.

                                                                            Although I'm on the side of getting my hands dirty, I'm not sure if the difference is that different. A modern compiler embeds a considerable degree of probabilistic behaviour.

                                                                            • ashton314 2 days ago

                                                                              Compilers use heuristics which may result in dramatically different results between compiler passes. Different timing effects during compilation may constrain certain optimization passes (e.g. "run algorithm x over the nodes and optimize for y seconds") but in the end the result should still not modify defined observable behavior, modulo runtime. I consider that to be dramatically different than the probabilistic behavior we get from an LLM.

                                                                              • davidrupp 2 days ago

                                                                                > A modern compiler embeds a considerable degree of probabilistic behaviour.

                                                                                Can you give some examples?

                                                                                • glitchc a day ago

                                                                                  Any time the language specification is undefined, the compiler behaviour will be probabilistic at best. Here's an example for C:

                                                                                  https://wordsandbuttons.online/so_you_think_you_know_c.html
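                                                                                  In the spirit of that quiz (this particular example is mine, not taken from the article): even well-defined corners of C routinely surprise people. `sizeof`, for instance, doesn't evaluate its operand, so a side effect written inside it never happens:

                                                                                  ```c
                                                                                  #include <stdio.h>

                                                                                  int main(void) {
                                                                                      int i = 0;
                                                                                      /* sizeof does not evaluate its operand (except for VLAs),
                                                                                       * so the i++ side effect never happens. */
                                                                                      size_t s = sizeof(i++);
                                                                                      printf("i = %d\n", i); /* prints "i = 0" */
                                                                                      printf("sizeof = %zu\n", s); /* sizeof(int) is implementation-defined, commonly 4 */
                                                                                      return 0;
                                                                                  }
                                                                                  ```

                                                                                  The `sizeof` result is well-defined but implementation-dependent, which is a different beast from unspecified or undefined behavior, where different compilers (or flags) can legitimately produce different results for the same source.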

                                                                                  • hn_acc1 2 days ago

                                                                                    There are pragmas you can give to a compiler to tell it to "expect that this code path is (almost) never followed". I.e. if you have an assert on nullptr, for example. You want it to assume the assert rarely gets triggered, and highly optimize instruction scheduling / memory access for the "not nullptr" case, but still assert (even if it's really, REALLY slow, relatively speaking) to handle the nullptr case.
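                                                                                    The hint being described sounds like GCC/Clang's `__builtin_expect`, often wrapped in `likely`/`unlikely` macros. A minimal sketch, assuming a GCC-compatible compiler; the hint only affects code layout and scheduling, never observable behavior:

                                                                                    ```c
                                                                                    #include <stdio.h>

                                                                                    /* Tell the optimizer the condition is almost never true, so it lays out
                                                                                     * the hot path as the fall-through and moves the cold path out of the way. */
                                                                                    #define unlikely(x) __builtin_expect(!!(x), 0)

                                                                                    static int process(const int *p) {
                                                                                        if (unlikely(p == NULL)) {
                                                                                            /* Cold path: the slow assert-style case described above. */
                                                                                            fprintf(stderr, "null pointer\n");
                                                                                            return -1;
                                                                                        }
                                                                                        return *p * 2; /* Hot path, optimized for the common case. */
                                                                                    }

                                                                                    int main(void) {
                                                                                        int x = 21;
                                                                                        printf("%d\n", process(&x)); /* prints 42 */
                                                                                        return 0;
                                                                                    }
                                                                                    ```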

                                                                                    • WD-42 2 days ago

                                                                                      I keep hearing this but it’s a head scratcher. They might be thinking of branch prediction, but that’s a function of the cpu, not the compiler.

                                                                                      • ModernMech 2 days ago

                                                                                        It’s not that they embed probabilistic behavior per se. It's more that they are chaotic systems, in that a slight change of input can drastically change the output. But ideally, a good compiler is deterministic: given the same input, the output should always be the same. If that were not generally true, programming would be much harder than it is.

                                                                                        • glitchc a day ago

                                                                                          No, it can also vary for the same input. The -ffast-math flag in gcc is a good example.

                                                                                    • eikenberry 2 days ago

                                                                                      Local models can be deterministic and that is one of the reasons why they will win out over service based models once the hardware becomes available.

                                                                                      • bckr 2 days ago

                                                                                        The LLM is not part of the application.

                                                                                        The LLM expands the text of your design into a full application.

                                                                                        The commenter you’re responding to is clear that they are checking the outputs.

                                                                                        • RAdrien 2 days ago

                                                                                          This is an excellent reply

                                                                                          • rajap 2 days ago

                                                                                            with proper testing you can make sure that given A the returned value is B

                                                                                            • charcircuit 2 days ago

                                                                                              >LLMs, by their very nature are probabilistic.

                                                                                              So are compilers, but people still successfully use them. Compilers and LLMs can both be made deterministic but for performance reasons it's convenient to give up that guarantee.

                                                                                              • hn_acc1 2 days ago

                                                                                                AIUI, if you made an LLM deterministic, every mostly-similar prompt would return the same result (i.e. access the same training data set) and if that's wrong, the LLM is just plain broken for that example. Hacked-in "temperature" (randomness) is the only way to hopefully get a correct result - eventually.

                                                                                                • WD-42 2 days ago

                                                                                                  What are these non deterministic compilers I keep hearing about, honestly curious.

                                                                                                  • charcircuit 2 days ago

                                                                                                    For example looping over the files in a directory can happen in a different order depending on the order the files were created in. If you are linking a bunch of objects the order typically matters. If the compiler is implemented correctly the resulting binary should functionally be the same but the binary itself may not be exactly the same. Or even when implemented correctly you will see cases where different objects can be the one to define a duplicate symbol depending on their relative order.

                                                                                                    • ModernMech 2 days ago

                                                                                                      That's not nondeterminism though, you've changed the input (the order of the files). Nondeterminism would be if the binary changes despite the files being in the same order. If the binary is the same holding fixed the order of the files, then the output is deterministic.

                                                                                                    • PhunkyPhil 2 days ago

                                                                                                      GCC can use randomized branch prediction.

                                                                                                  • daveguy 2 days ago

                                                                                                    > An abstraction is a deterministic, pure function, than when given A always returns B.

                                                                                                    That is just not correct. There is no rule that says an abstraction is strictly functional or deterministic.

                                                                                                    In fact, the original abstraction was likely language, which is clearly neither.

                                                                                                    The cleanest and easiest abstractions to deal with have those properties, but they are not required.

                                                                                                    • robenkleene 2 days ago

                                                                                                      This is such a funny example, because language is the main way that we communicate with LLMs. Which means you can tie both of your points together in the same example: if you take a scene and describe it in words, then have an LLM reconstruct the scene from the description, you'd likely get a scene that looks very different from the original source. This simultaneously makes both your point and the other commenter's point:

                                                                                                      1. Language is an abstraction and it's not deterministic (it's really lossy)

                                                                                                      2. LLMs behave differently than the abstractions involved in building software, where normally if you gave the same input, you'd expect the same output.

                                                                                                      • daveguy 2 days ago

                                                                                                        Yes, most abstractions are not as clean as leak free functional abstractions. Most abstractions in the world are leaky and lossy. Abstraction was around long before computers were invented.

                                                                                                      • beepbooptheory 2 days ago

                                                                                                        What is the thing that language itself abstracts?

                                                                                                        • fkyoureadthedoc 2 days ago

                                                                                                          Your thoughts, I'd say, but it's more of a two-way street than what I think of as abstraction.

                                                                                                          • daveguy 2 days ago

                                                                                                            Okay, language was the original vehicle for abstraction if everyone wants to get pedantic about it. And yes, abstraction of thought. Only in computerland (programming, mathematics and physics) do you even have the opportunity to have leak-free functional abstractions. That is not the norm. LLM-like leaky abstractions are the norm.

                                                                                                            • beepbooptheory 2 days ago

                                                                                                              This is clearly not true. For example, the Pythagorean theorem is an old, completely leak free, abstraction with no computer required.

                                                                                                              Sorry for being pedantic, I was just curious what you mean at all. Language as abstraction of thought implies that thought is always somehow more "general" than language, right? But if that was the case, how could I read a novel that brings me to tears? Is not my thought in this case more the "lossy abstraction" of the language than the other way around?

                                                                                                              Or, what is the abstraction of the "STOP" on the stop sign at the intersection?

                                                                                                              • daveguy a day ago

                                                                                                                Psst. Mathematics didn't come before language.

                                                                                                    • blibble 2 days ago

                                                                                                      I can count the number of times in my 20-year career that I've had to look at compiler-generated assembly on one finger

                                                                                                      and I've never looked at the machine code produced by an assembler (other than when I wrote my own as a toy project)

                                                                                                      is the same true of LLM usage? absolutely not

                                                                                                      and it never will be, because it's not an abstraction

                                                                                                      • KoolKat23 2 days ago

                                                                                                        It's still early stages, that is why.

                                                                                                        It is not yet good enough or there is not yet sufficient trust. Also there are still resources allocated to checking the code.

                                                                                                        I saw a post yesterday showing Brave browser's new tab using 70mb of RAM in the background. I'm very sure there's code there that can be optimized, but who gives a shit. It's splitting hairs and our computers are powerful enough now that it doesn't matter.

                                                                                                        Immateriality has abstracted those particular few lines of code away.

                                                                                                        • LandR 2 days ago

                                                                                                          >> I saw a post yesterday showing Brave browser's new tab using 70mb of RAM in the background. I'm very sure there's code there that can be optimized, but who gives a shit.

                                                                                                          I do. This sort of attitude is how we have machines more powerful than ever yet everything still seems to run like shit.

                                                                                                          • const_cast 2 days ago

                                                                                                            This is barely related, but I bet that the extra 70 MB of RAM isn't even waste - it's probably an optimization. It's possible they're spinning up a JS VM preemptively so that when you do navigate you have a hot interpreter for the inevitable script. Maybe they allocate memory for the DOM too.

                                                                                                            • KoolKat23 2 days ago

                                                                                                              Probably the case, I felt bad using this as an example as I don't know the specifics, but thought it is an easy way to convey my point (sorry if so Brave developers).

                                                                                                          • ncruces 2 days ago

                                                                                                            > It's still early stages, that is why.

                                                                                                            Were we advised to check compiler output every single time "in the early days"?

                                                                                                            No, that's not the difference.

                                                                                                            A compiler from whatever high/low level language is expected to translate a formal specification of an algorithm faithfully. If it fails to do so, the compiler is buggy, period.

                                                                                                            An LLM is expected to understand fuzzy language and spit out something that makes sense.

                                                                                                            It's a fundamentally different task, and I trust a human more with this. Certainly, humans are judged by their capability to do this, apply common sense, ask for necessary clarification, also question what they're being asked to do.

                                                                                                            • rafterydj 2 days ago

                                                                                                              I feel like I'm taking crazy pills or misunderstanding you. Shouldn't it matter that they are using 70mb of RAM more or less totally wastefully? Maybe not a deal breaker for Brave, sure, but waste is waste.

                                                                                                              I understand the world is about compromises, but all the gains of essentially every computer program ever could be summed up by accumulation of small optimizations. Likewise, the accumulation of small wastes kills legacy projects more than anything else.

                                                                                                              • Mtinie 2 days ago

                                                                                                                It could matter but what isn't clear to me is if 70MB is wasteful in this specific context. Maybe? Maybe not?

                                                                                                                Flagging something as potentially problematic is useful but without additional information related to the tradeoffs being made this may be an optimized way to do whatever Brave is doing which requires the 70MB of RAM. Perhaps the non-optimal way it was previously doing it required 250MB of RAM and this is a significant improvement.

                                                                                                                • KoolKat23 2 days ago

                                                                                                                  Yes, it can be construed as wasteful. But it's exactly that, a compromise. Could the programmer spend their time better elsewhere, generating more value? Not doing so is also wasteful.

                                                                                                                  Supply and demand will decide what compromise is acceptable and what that compromise looks like.

                                                                                                                • ToucanLoucan 2 days ago

                                                                                                                  > It's still early stages, that is why.

                                                                                                                  I have been hearing (reading?) this for a solid two years now, and LLMs were not invented two years ago: they are ostensibly the same tech as they were back in 2017, with larger training pools and some optimizations along the way. How many more hundreds of billions of dollars is reasonable to throw at a technology that has never once exceeded the lofty heights of "fine"?

                                                                                                                  At this point this genuinely feels like silicon valley's fever dream. Just lighting dumptrucks full of money on fire in the hope that it does something better than it did the previous like 7 or 8 times you did it.

                                                                                                                  And normally I wouldn't give a shit, money is made up and even then it ain't MY money, burn it on whatever you want. But we're also offsetting any gains towards green energy standing up these stupid datacenters everywhere to power this shit, not to mention the water requirements.

                                                                                                                  • SamPatt 2 days ago

                                                                                                                    The difference between using Cursor when it launched and using Cursor today is dramatically different.

                                                                                                                    It was basically a novelty before. "Wow, AI can sort of write code!"

                                                                                                                    Now I find it very capable.

                                                                                                                    • player1234 7 hours ago

                                                                                                                      Trillions different?

                                                                                                                    • KoolKat23 2 days ago

                                                                                                                      I know from my own use case, it went from Gemini 1.5 being unusable to Gemini 2.0 being usable. So 2 years makes a big difference. It's out there right now being used in business, making money. This is tangible.

                                                                                                                      I suspect there's a lot more use out there generating money than you realize, there's no moat in using it, so I'm pretty sure it's kept on the downlow for fear of competitors catching up (which is quick and cheap to do).

                                                                                                                      How far can one extrapolate? I defer to the experts actually making these things and to those putting money on the line.

                                                                                                                    • vrighter a day ago

                                                                                                                      I hate this "early stages" argument. It either works, or it doesn't. If it works sometimes that's called "alpha software" and should not be released and hyped as a finished product. The early stages of the GUI paradigm at release started with a fully working GUI. Windows didn't sometimes refuse to open. The OS didn't sometimes open the wrong program. The system didn't sometimes hallucinate a 2nd cursor. The system worked and then it was shipped.

                                                                                                                      The "early stages" argument means "not fit for production purposes" in any other case. It should also mean the same here. It's early stages because the product isn't finished (and can't be, at least with current knowledge)

                                                                                                                    • elAhmo 2 days ago

                                                                                                                      It is an abstraction.

                                                                                                                      Just because you end up looking at what the prompt produced “under the hood”, in whichever language the output was generated, doesn’t mean every user does.

                                                                                                                      Similarly with assembly: you might never have taken a look at it, but there are people who do, and they could argue the same thing as you.

                                                                                                                      The lines will be very blurry in the near future.

                                                                                                                      • card_zero 2 days ago

                                                                                                                        I can fart in the general direction of the code and that's a kind of abstraction too. It distills my intent down to a simple raspberry noise which could then be interpreted by sufficiently intelligent software.

                                                                                                                        • joenot443 2 days ago

                                                                                                                          Sort of like when people argue about "what is art?", some folks find it clever to come up with the most bizarre of examples to point to, as if the entire concept rests on the existence of its members at the fringe.

                                                                                                                          Personally, I think if your farts are an abstraction that you can derive useful meaning from the mapping, who are we to tell you no?

                                                                                                                          • card_zero 2 days ago

                                                                                                                            So, "can derive useful meaning from" is the point of contention.

                                                                                                                            (Also: bizarre examples = informative edge cases. Sometimes.)

                                                                                                                          • Phemist 2 days ago

                                                                                                                            The final result of course depending on how many r's there are in the raspberry.

                                                                                                                            • chaps 2 days ago

                                                                                                                              Could you please expand on this thought? I'm curious where this abstraction's inflection points are.

                                                                                                                            • rsynnott 2 days ago

                                                                                                                              > Just because you end up looking at what the prompt looks like “under the hood” in whichever language it produced the output, doesn’t mean every user does.

                                                                                                                              > Similar as with assembly, you might have not taken a look at it, but there are people that do and could argue the same thing as you.

                                                                                                                              ... No. The assembler is deterministic. Barring bugs, you can basically trust that it does exactly what it was told to. You absolutely cannot say the same of our beloved robot overlords.

                                                                                                                            • CuriouslyC 2 days ago

                                                                                                                              I like to think of it as a duck-typed abstraction. I have formal specs which guarantee certain things about my software, and the agent has to satisfy those (including performance, etc). Given that, the code the agent generates is essentially fungible in the manner of machine code.

                                                                                                                              • sarchertech 2 days ago

                                                                                                                                Only if you can write specs precisely enough. Those of us who remember the way software used to be built learned that this is basically impossible, and that English is a terrible language to even attempt it in.

                                                                                                                                If you do make your specs precise enough, such that 2 different dev shops will produce functionally equivalent software, your specs are equivalent to code.

                                                                                                                                • CuriouslyC 2 days ago

                                                                                                                                  This is doable, I have a multi-stage process that makes it pretty reliable. Stage 1 is ideation, this can be with an LLM or humans, w/e, you just need a log. Stage 2 is conversion of that ideation log to a simple spec format that LLMs can write easily called SRF, which is fenced inside a nice markdown document humans can read and understand. You can edit that SRF if desired, have a conversation with the agent about it to get them to massage it, or just have your agent feed it into a tool I wrote which takes an SRF and converts it to a CUE with full formal validation and lots of other nice features.

                                                                                                                                  The value of this is that FOR FREE you can get comprehensive test definitions (unit+e2e), kube/terraform infra setup, documentation stubs, OpenAPI specs, etc. It's seriously magical.

                                                                                                                                  • sarchertech 2 days ago

                                                                                                                                    Imagine that when you deploy you have an LLM that regenerates code based on your specs, since code is fungible as long as it fits the spec.

                                                                                                                                    Keep in mind that I have seen hundreds to thousands of production errors in applications with very-high-coverage test suites.

                                                                                                                                    How many production errors would you expect to see over 5 years of LLM deployments?

                                                                                                                                    • CuriouslyC 2 days ago

                                                                                                                                      Maybe more, but agents can also respond to errors much more quickly, and they can manage small staged rollouts with monitoring to catch an issue before it goes global, so your actual downtime will be much lower even as you're producing code way faster.

                                                                                                                                      • sarchertech 2 days ago

                                                                                                                                        My magic potion will make you much faster. It will also make you crap your pants all the time. But don’t worry I have a magic charm that I can sell that will change colors to alert you that you’ve shat yourself, and since you’re faster you can probably change your pants before anyone important notices.

                                                                                                                              • joenot443 2 days ago

                                                                                                                                "SwiftUI red circle with a white outline 100px"

                                                                                                                                ```
                                                                                                                                Circle()
                                                                                                                                    .fill(Color.red)
                                                                                                                                    .overlay(
                                                                                                                                        Circle().stroke(Color.white, lineWidth: 4)
                                                                                                                                    )
                                                                                                                                    .frame(width: 100, height: 100)
                                                                                                                                ```

                                                                                                                                Is the mapping 1:1 and completely lossless? Of course not, but I'd say the former is most definitely a sort of abstraction of the latter, and one would be disingenuous to pretend it's not.

                                                                                                                                • theptip 2 days ago

                                                                                                                                  > and it never will be

                                                                                                                                  The only thing I’m certain of is that you’re highly overconfident.

                                                                                                                                  I’m sure plenty of assembly gurus said the same of the first compilers.

                                                                                                                                  > because it's not an abstraction

                                                                                                                                  This just seems like a category error. A human is not an abstraction, yet they write code and produce value.

                                                                                                                                  An IDE is a tool not an abstraction, yet they make humans more productive.

                                                                                                                                  When I talk about moving up the levels of abstraction I mean: taking on more abstract/less-concrete tasks.

                                                                                                                                  Instead of “please wire up login for our new prototype” it might be “please make the prototype fully production-ready, figure out what is needed” or even “please ship a new product to meet customer X’s need”.

                                                                                                                                  • sarchertech 2 days ago

                                                                                                                                    >“please ship a new product to meet customer X’s need”

                                                                                                                                    The customer would just ask the AI directly to meet their needs. They wouldn’t purchase the product from you.

                                                                                                                                • chii 2 days ago

                                                                                                                                  > my only job is to identify places where the current spec and the intended functionality differ and create a new spec to mitigate.

                                                                                                                                  and to be able to do this efficiently or even "correctly", you'd need to have had mountains of experience evaluating an implementation, and be able to imagine the consequences of that implementation against the desired outcome.

                                                                                                                                  Doing this requires experience that would get eroded by the use of an LLM. It's very similar to higher level maths (stuff like calculus) being much more difficult if you had poor arithmetic/algebra skills.

                                                                                                                                  • CuriouslyC 2 days ago

                                                                                                                                    Would that experience get eroded though? LLMs let me perform experiments (including architecture/system level) quickly, and build stress test/benchmark/etc harnesses quickly to evaluate those experiments, so in the time you can build human intuition with one experiment I've done 10. I build less intuition from each experiment, but I'm building broader intuition, and if I choose a bad experiment it's a small cost, but choosing a bad experiment and performing it manually is brutal.

                                                                                                                                    • jplusequalt 2 days ago

                                                                                                                                      >Would that experience get eroded though?

                                                                                                                                      Yes. If you stop doing something, you get worse at it. There is literally no exception to this that I'm aware of. In the future where everyone is dependent on ever larger amounts of code, the possibility that nobody will be equipped to write/debug that code should scare you.

                                                                                                                                      • CuriouslyC 2 days ago

                                                                                                                                        The amount of weightlifting a strength athlete needs to do to stay near their peak (but outside medal range) is ~15% of a full training workload. People can play instruments once a month and still be exceptional once the pathways are set down. Are you getting slightly worse at direct code jockeying? Sure, but not a lot, and you're getting superpowers in exchange.

                                                                                                                                        • jplusequalt 2 days ago

                                                                                                                                          >and you're getting superpowers in exchange

                                                                                                                                          The superpower you speak of is to become a product manager, and lose out on the fun of problem solving. If that's the future of tech, I want nothing to do with it.

                                                                                                                                          • jplusequalt 2 days ago

                                                                                                                                            Both of these examples have much more to do with muscle memory than critical thinking.

                                                                                                                                    • soraminazuki 2 days ago

                                                                                                                                      Why is every discussion about AI like this? We get study after study and example after example showing that AI is unreliable [1], hurts productivity [2], and causes cognitive decline [3]. Yet, time and time again, a group of highly motivated individuals show up and dismiss these findings, not with counter-evidence, but through sheer sophistry. Heck, this is a thread meant to discuss evidence about LLM reliance contributing to cognitive decline. Instead, the conversation quickly derailed into an absurd debate about whether AI coding tools resemble compilers. It's exhausting.

                                                                                                                                      [1]: https://arxiv.org/abs/2401.11817

                                                                                                                                      [2]: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

                                                                                                                                      [3]: https://publichealthpolicyjournal.com/mit-study-finds-artifi...

                                                                                                                                      • notTooFarGone 2 days ago

                                                                                                                                        If you had google maps and you knew the directions it gives are 80% gonna be correct, would you still need navigation skills?

                                                                                                                                        You could also tweak it by going like "Lead me to the US" -> "Lead me to the state of New York" -> "Lead me to New York City" -> "Lead me to Manhattan" -> "Lead me to the museum of new arts" and it would give you 86% accurate directions, would you still need to be able to navigate?

                                                                                                                                        How about when you go over roads that are very frequently used you push to 92% accuracy, would you still need to be able to navigate?

                                                                                                                                        Yes of course because in 1/10 trips you'd get fucking lost.

                                                                                                                                        My point is: unless you get to that 99% mark, you still need the underlying skill and the abstraction is only a helper and always has to be checked by someone who has that underlying skill.

                                                                                                                                        I don't see LLMs as that 99% solution in the next years to come.

                                                                                                                                        • lr4444lr 2 days ago

                                                                                                                                          I'd be very cautious about calling LLM output "an abstraction layer" in software.

                                                                                                                                          • matthewdgreen 2 days ago

                                                                                                                                            It is possible to use LLMs this way. If you're careful. But at every place where you can use LLMs to outsource mechanical tasks, there will also be a temptation to outsource the full stack of conceptual tasks that allow you to be part of what's going on. This will create gaps where you're not sitting at any level of the abstraction hierarchy, you're just skipping parts of the system. That temptation will be greater for less experienced people, and for folks still learning the field. That's what I'm scared about.

                                                                                                                                            • aprilthird2021 2 days ago

                                                                                                                                              > We're just moving up the abstraction ladder, like we did with compilers.

                                                                                                                              We're not, because you still have to check all the outputted code. You didn't have to check every compilation step of a compiler. It was testable, actual code, not non-deterministic output from English-language input.

                                                                                                                                              • elAhmo 2 days ago

                                                                                                                                                I would bet a significant amount of money that many LLM users don’t check the output. And as tools improve, this will only increase.

                                                                                                                                The number of users actually checking the output of a compiler is essentially zero. You just trust it.

                                                                                                                                                LLMs are moving that direction, whether we like it or not

                                                                                                                                                • Jensson 2 days ago

                                                                                                                                                  > The number of users actually checking the output of a compiler is nonexistent. You just trust it.

                                                                                                                                  Quite a few people who work on low-level systems do this. I have done it a few times to debug build issues: one time a single file suddenly made compile times go up by orders of magnitude. The compiler had inlined a big sort procedure in an unrolled loop, so the sorting code was duplicated hundreds of times over in a single function, creating a gigantic binary that took ages to compile because the optimizer churned on that giant function.

                                                                                                                                                  That is slow both in runtime and compile time, so I added a tag to not inline the sort there, and all the issues disappeared. The sort didn't have a tag to inline it, so the compiler just made an error here, it shouldn't have inlined such a large function in an unrolled loop.

                                                                                                                                                  • aprilthird2021 2 days ago

                                                                                                                                                    Of course they don't. That's why things like the NX breach happen. That's also why they don't learn anything when they use these tools and their brains stagnate.

                                                                                                                                                    • __loam 2 days ago

                                                                                                                                                      Well they're not improving that much anymore. That's why Sam Altman is out there saying it's a bubble.

                                                                                                                                                      • CuriouslyC 2 days ago

                                                                                                                                                        This is incorrect, they are improving, you just don't understand how to measure and evaluate it.

                                                                                                                                                        The Chinese models are getting hyper efficient and really good at agentic tasks. They're going to overtake Claude as the agentic workhorses soon for sure, Anthropic is slow rolling their research and the Chinese labs are smoking. Speed/agentic ability don't show big headlines, but they really matter.

                                                                                                                                                        GPT5 might not impress you with its responses to pedestrian prompts, but it is a science/algorithm beast. I understand what Sam Altman was saying about how unnerving its responses can be, it can synthesize advanced experiments and pull in research from diverse areas to improve algorithms/optimize in a way that's far beyond the other LLMs. It's like having a myopic autistic savant postdoc to help me design experiments, I have to keep it on target/focused but the depth of its suggestions are pretty jaw dropping.

                                                                                                                                                    • pessimizer 2 days ago

                                                                                                                                                      > We're not because you have to still check every outputted code.

                                                                                                                                                      To me, that's what makes it an abstraction layer, rather than just a servant or an employee. You have to break your entire architecture into units small enough that you know you can coax the machine to output good code for. The AI can't be trusted as far as you can throw it, but the distance from you to how far you can throw is the abstraction layer.

                                                                                                                                                      An employee you can just tell to make it work, they'll kill themselves trying to do it, or be replaced if they don't; eventually something will work, and you'll take all the credit for it. AI is not experimenting, learning and growing, it stays stupid. The longer it thinks, the wronger it thinks. You deserve the credit (and the ridicule) for everything it does that you put your name on.

                                                                                                                                                      -----

                                                                                                                                                      edit: and this thread seems to think that you don't have to check what your high level abstraction is doing. That's probably why most programs run like crap. You can't expect something you do in e.g. python to do the most algorithmically sensible thing, even if you wrote the algorithm just like the textbook said. It may make weird choices (maybe optimal for the general case, but horrifically bad for yours) that mean that it's not really running your cute algorithm at all, or maybe your cute algorithm is being starved by another thread that you have no idea why it would be dependent on. It may have made correct choices when you started writing, then decided to make wrong choices after a minor patch version change.

                                                                                                                                      To pretend perfection is a necessary condition for abstraction is not something anybody would say directly. Never. All we talk about is leaky abstractions.

                                                                                                                                                      Remember when GTA loading times, which (a counterfactual because we'll never know) probably decimated sales, playtime, and at least the marketing of the game, turned out to be because they were scanning some large, unnecessary json array (iirc) hundreds of times a second? That's probably a billion dollar mistake. Just because some function that was being blindly called was not ever reexamined, and because nobody profiled properly (i.e. checked the output.)

                                                                                                                                                    • bdhcuidbebe 2 days ago

                                                                                                                                                      > We're just moving up the abstraction ladder, like we did with compilers.

                                                                                                                                                      Got any studies about reasoning decline from using compilers to go with your claim?

                                                                                                                                                      • __loam 2 days ago

                                                                                                                                                        Leakiest abstraction of all time.

                                                                                                                                                        • soraminazuki 2 days ago

                                                                                                                                                          That's untrue, as many others have pointed out. Furthermore, compilers don't cause the kind of cognitive decline that's shown in the study.

                                                                                                                                                          • bgwalter 2 days ago

                                                                                                                                            Compilers are sound abstractions. CompCert is literally a formally verified compiler.

                                                                                                                                                            LLMs make up whatever they feel like and are pretty bad at architecture as well.

                                                                                                                                                            • rkagerer a day ago

                                                                                                                                                              Good luck once you find out how impossible it is to maintain all that AI generated slop, over the long term.

                                                                                                                                                              • MobiusHorizons 2 days ago

                                                                                                                                                                At some point details _will_ matter. Usually when you don't want them to like during an outage, or at a very late stage blocker on an already delayed project. What you don't understand, you can't plan for.

                                                                                                                                                              • defgeneric 2 days ago

                                                                                                                                                                A lot of the boosterism seems to come from those who never had the ability in the first place, and never really will, but can now hack a demo together a little faster than before. But I'm mostly concerned about those going through school who don't even realize they're undermining themselves by reaching for AI so quickly.

                                                                                                                                                                • NoGravitas 2 days ago

                                                                                                                                                                  Perhaps more importantly, those boosters may never have had the ability to really model a problem in the first place, and didn't miss it, because muddling through worked well enough for them. Many such cases.

                                                                                                                                                                • mooreds 2 days ago

                                                                                                                                                                  I've posted this Asimov short story before, but this comment inspires me to post it again.

                                                                                                                                                                  http://employees.oneonta.edu/blechmjb/JBpages/m360/Professio...

                                                                                                                                                                  "Somewhere there must be men and women with capacity for original thought."

                                                                                                                                                                  He wrote that in 1957. 1957!

                                                                                                                                                                  • Terr_ 2 days ago

                                                                                                                                                                    The first sentence made me expect Asimov's "The Feeling of Power", which--avoiding spoilers--regards the (over-)use of calculators by a society.

                                                                                                                                                                    However, since I brought up calculators, I'd like to pre-emphasize something: They aren't analogous to today's LLMs. Most people don't offload their "what and why" executive decision-making to a calculator, calculators are orders of magnitude more trustworthy, and they don't emit plausible lies to cover their errors... Though that last does sound like another short-story premise.

                                                                                                                                                                    • zem 2 days ago

                                                                                                                                                                      perhaps my favourite of his short stories. there's also the more satirical "the feeling of power", which touches on the same theme https://ia800806.us.archive.org/20/items/TheFeelingOfPower/T...

                                                                                                                                                                      • ducttapecrown 2 days ago

                                                                                                                                                                        It turns out incremental thought is much better than original thought. I guess.

                                                                                                                                                                      • el_benhameen 2 days ago

                                                                                                                                                                        What about people who use the machines to augment their learning process? I find that being able to ask questions, particularly “dumb” questions that I don’t want to bother someone else with and niche questions that might not be answered well in the corpus, helps me better understand new concepts. If you just take the answers and move on, then sure, you’re going to have a bad time. But if you critically interrogate the answers and synthesize the information, I don’t see how this isn’t a _better_ era for people who want to develop a deep understanding of something.

                                                                                                                                                                        • aprilthird2021 2 days ago

                                                                                                                                                                          > I don’t see how this isn’t a _better_ era for people who want to develop a deep understanding of something.

                                                                                                                                                                          Same way a phone in your pocket gives you the world's compiled information available in a moment. But that's generally led to loneliness, isolation, social upheaval, polarization, and huge spread of wrong information.

                                                                                                                                                                          Whether you can handle the negatives is a big "if". Even the smartest of our professional class are addicted to doomscrolling these days. Do you think they will get only the positives of AI use and avoid the negatives?

                                                                                                                                                                          • prewett 2 days ago

                                                                                                                                                                            You conflated two things. Access to the world's information did not produce loneliness, etc.; it was the availability, on that same device, of social networks designed for engagement^Waddiction that did. You don't need to install FB et al. on your phone.

                                                                                                                                                                            • aprilthird2021 2 days ago

                                                                                                                                                                              Again, most people have FB on their phone. The vast majority of people will not be like snowflake HN anecdotes who claim to get only the positives and not the negatives of technology.

                                                                                                                                                                          • Peritract 2 days ago

                                                                                                                                                                            There's a difference between learning and the perception/appearance of learning; teachers need to manage this in classrooms, but how do you manage it on your own?

                                                                                                                                                                            • el_benhameen 2 days ago

                                                                                                                                                                              I don’t think this is a critique of llms so much as a general observation that actual deep learning is difficult.

                                                                                                                                                                              I’ve read plenty of books (thanks, Dickens) where I looked at every word on every page but can recall very little of what they meant. You can look at the results from an llm and say “huh, cool, I know that now” and do nothing to assimilate that knowledge, or you can think deeply about it and try to fit it in with everything else you know about the subject. The advantage here is that you can ask follow-up questions if something doesn’t click.

                                                                                                                                                                              • Peritract 2 days ago

                                                                                                                                                                                It's not a critique of LLMs, but it is a reason to be wary of the claim that it really helps you learn.

                                                                                                                                                                                We have the idea of 'tutorial hell' for programming (particularly gamedev), where people go through the motions of learning without actually progressing.

                                                                                                                                                                                Until you go apply the skills and check, it's hard to evaluate the effectiveness of a learning method.

                                                                                                                                                                            • benterix 2 days ago

                                                                                                                                                                              I fully agree. Using LLMs for learning concepts is great if you combine it with actively using/testing your knowledge. But outsourcing your tasks to an LLM makes your inner muscles weaker.

                                                                                                                                                                            • keepamovin 2 days ago

                                                                                                                                                                              That reminds me of back when 12,500 years ago I could really shape a flint into a spear head in no time. Took me seasons to learn that skill from the Masters. Then Sky Gods came and taught people metal-fire. Nobody knows how to really chip and carve a point any more. They just cast them in moulds. We are seeing a literal split of skills in front of our eyes. People who know how to shape rocks deeply. And people who just know how to buy a metal tool with berries. I like berries as much as the next person, but give it a couple of years, they will be begging for flint tips back. I guarantee it. Those metal users will have no idea how to survive without a collective industry.

                                                                                                                                                                              • intended 2 days ago

                                                                                                                                                                                Ah you are an old one. I was born later, and missed this phase of our history.

                                                                                                                                                                                This reminds me of back 11,500 years ago, when people used to worship the sharper or bigger pieces of obsidian. They felt the biggest piece would win them the biggest hunt.

                                                                                                                                                                                They forgot that the size of the tool mattered less than mastery of the hunt. Why, the best hunter could take down a moving mammoth with just the right words, string, and a cliff.

                                                                                                                                                                                • kamaal a day ago

                                                                                                                                                                                  On a more serious note, here in India most millennials/boomers remember taking an Engineering Drawing class in the second semester of Engineering courses. It would involve using a real drafter, real calculations, and making real drawings. Isometric projections and all.

                                                                                                                                                                                  I remember it took me something like 4 nights of standing to make isometric projections of a landing gear strut. I wondered if pursuing an Engineering degree was even worth it. Some of my classmates did quit as the years went by.

                                                                                                                                                                                  These days they just let you use CAD software to make things work, and from what I hear, kids just copy-paste files and are done with the assignments.

                                                                                                                                                                                  I mean, we all have this "kids these days" talk, but some things do matter. Making all these tasks easy has allowed lots of people to pass who would otherwise have failed in previous generations.

                                                                                                                                                                                  There is now an unemployment and low-pay crisis all over India due to so many engineers passing. Sometimes when I hear the newer generations complain about how hard it is to buy a home or get a good job, I'm inclined to think that perhaps hard things should have been kept hard for a reason.

                                                                                                                                                                                  • intended a day ago

                                                                                                                                                                                    CAD vs. hand-drawing isometrics spotlights the labor and effort, but ends up overshadowing the role of that effort in developing mental models.

                                                                                                                                                                                    The issue is that it's a homework exercise. Its goal is to help you practice thinking about the problem. The Indian system is clear proof that passing an exam is easier than actually mastering the subject being tested.

                                                                                                                                                                                    However, this is not the cause of the jobs crisis. That is simply because there are not enough jobs which can provide income and social mobility. That is why we needed growth.

                                                                                                                                                                                    Some of those ladders have been removed because automation has removed low-skill labor roles. Now we are going to remove entry level roles.

                                                                                                                                                                                    To put it crudely, humanity's "job" today seems to be "growing" a generation of humans over a 20-year span, preparing them for the world that we face.

                                                                                                                                                                                    This means building systems that deliver timely nutrition, education, stimulation, healthcare, and support.

                                                                                                                                                                                    We could do this better.

                                                                                                                                                                              • Eawrig05 a day ago

                                                                                                                                                                                I think this is a fun problem: Say LLMs really do allow people to be 2X or 3X more productive. Then those people that understand concepts and rely on LLMs should be more successful and productive. But if those people slowly decline in ability they will become less successful in the long term. But how much societal damage is done in the meantime? Such as creating insecure software or laying off/not hiring junior developers.

                                                                                                                                                                                • stocksinsmocks 2 days ago

                                                                                                                                                                                  (3) people who never had that capability.

                                                                                                                                                                                  Remember, we aren’t all above average. You shouldn’t worry. Now that we have widespread literacy, nobody needs to, and few even could, recite the Norse sagas or the Iliad from memory. Basically nobody has useful skills for nomadic survival.

                                                                                                                                                                                  We’re about to move on to more interesting problems, and our collective abilities and motivation will still be stratified as it always has been and must be.

                                                                                                                                                                                  • rf15 2 days ago

                                                                                                                                                                                    I don't think it's down to ability, I think it's down to the decision not to learn itself. And that scares me.

                                                                                                                                                                                    • stocksinsmocks 2 days ago

                                                                                                                                                                                      Let them.

                                                                                                                                                                                      -Your Friend Mel (probably)

                                                                                                                                                                                    • jplusequalt 2 days ago

                                                                                                                                                                                      >We’re about to move on to more interesting problems, and our collective abilities and motivation will still be stratified as it always has been and must be

                                                                                                                                                                                      Who is "we"? There are more people out there in the world doing hard physical labor, or data entry, than there are software engineers.

                                                                                                                                                                                    • skissane 2 days ago

                                                                                                                                                                                      The way I’ve been coding recently: I often ask the AI to write the function for me, because lazy. And then I don’t like the code it wrote, so I rewrite it. But it is still less mental effort to rewrite “mostly correct but slightly ugly” code than to write the whole thing from scratch.

                                                                                                                                                                                      Also, even though I have the Copilot extension in VSCode, I rarely use it, because I find it interrupts my flow with constant useless, incorrect, or unwanted suggestions. Instead, when I want AI help, I type out my request by hand into a Gemini gem that contains a prompt describing my preferred coding style. But even with that extra guidance as to how I want it to write code, I still often don’t like what it does and end up rewriting it.

                                                                                                                                                                                      • squigz 2 days ago

                                                                                                                                                                                        The loss of expertise does not mean the loss of a need for expertise. We'll still need experts. We'll also still need people who are... not experts. To that point...

                                                                                                                                                                                        > For now the difference between these two populations is not that pronounced yet but give it a couple of years.

                                                                                                                                                                                        There are lots and lots of programmers and other IT people who make a living that I wouldn't say fall into your first bucket.

                                                                                                                                                                                        • mrits 2 days ago

                                                                                                                                                                                          I suppose the question is whether we need to understand the concepts deeply. I'm not sure many of us did to begin with, and we have shipped a lot of code.

                                                                                                                                                                                          • MobiusHorizons 2 days ago

                                                                                                                                                                                            I see a lot of junior engineers, or more senior engineers who are outside their areas of expertise try to prioritize making progress without taking the time to understand. They will copy examples very closely, following best practices blindly, or get someone else to make the critical design decisions. This can get you surprisingly far, but it’s also dangerous because the further past your understanding you are operating, the more you might have to learn all at once in order to fix something when it breaks. Debugging challenges all your assumptions, and if you don’t have models of what the pieces are and how they interact, it’s incredibly hard to start building them when something is already broken. Even then, some engineers don’t learn the models when debugging and resort to defensive and superstitious behaviors based on whatever solution they stumbled on last time. This is a pretty normal part of the learning process, but some engineers don’t seem to get past this stage. Some don’t even want to.

                                                                                                                                                                                            • tmcb 2 days ago

                                                                                                                                                                                              Well, cargo cult programming is definitely a thing, and has been for a long time. It may “deliver value”, but it is not guaranteed. I believe entrepreneurs have an easier time having AI do the work for them because their value assessment framework is decoupled from code generation proper.

                                                                                                                                                                                            • InfamousRece 2 days ago

                                                                                                                                                                                              > slowly, slowly lose that capability.

                                                                                                                                                                                              Well, not so slowly it seems.

                                                                                                                                                                                              • pengaru 2 days ago

                                                                                                                                                                                                > and (2) people who outsource it to a machine and slowly, slowly lose that capability.

                                                                                                                                                                                                What I'm seeing is most of this group never really had the capability in the first place. These are the formerly unproductive slackers who now churn out GenAI slop with their name on it at an alarming rate.

                                                                                                                                                                                                • noboostforyou 2 days ago

                                                                                                                                                                                                  And it's almost exactly the same trend in skill atrophy we saw between millennials and Gen Z/Alpha, where most kids these days cannot do basic computer troubleshooting even if they've owned a computer/smartphone practically their entire lives.

                                                                                                                                                                                                  • senectus1 10 hours ago

                                                                                                                                                                                                    I think there is a third split as well: those who use the tool to learn. No vibe coding; instead, asking it questions and having it explain things to you, then switching to manual work and falling back on it to help decipher error messages, etc.

                                                                                                                                                                                                    It's like having your own personal on-tap tutor (for free, in most cases!).

                                                                                                                                                                                                    • rewgs a day ago

                                                                                                                                                                                                      100%. I'm _very_ hesitant to use LLMs _at all_ when programming, as the speed/learning trade-off so far hasn't proved itself to be worth it (especially given that it often _isn't_ faster on balance).

                                                                                                                                                                                                      However, one place where LLMs have proved to be incredibly helpful is with build tools, dependency hell, etc. Lately I've been trying to update a decade-old Node/Electron project to modern packages and best-practices, and holy hell I simply was not making any meaningful progress until I turned to Claude.

                                                                                                                                                                                                      The JS world simply moves too fast (especially back when this project was written) to make updating a project like that even remotely possible in any reasonable amount of time. I was tearing my hair out for days, but yesterday I was finally able to achieve what I wanted in a few hours with Claude. I still had to work slowly and methodically, and Claude made more than a few stupid errors along the way, but dealing with the delicate version balancing that coincided with API and import/export style changes, all the changes in the bundler world, etc., simply could not have been done without it. It's the first time that I was 100% glad I relied on an LLM and felt that it was precisely the right tool for the job at hand.

                                                                                                                                                                                                      • segmondy 2 days ago

                                                                                                                                                                                                        And (3), people who are able to use AI to build a mental model quickly and understand the concepts more deeply.

                                                                                                                                                                                                        • chaps 2 days ago

                                                                                                                                                                                                            > (1) people who are able to understand the concepts deeply, build a mental model of it and implement them in code at any level, and (2) people who outsource it to a machine and slowly, slowly loose that capability.
                                                                                                                                                                                                          
                                                                                                                                                                                                          ...is it really only going to be these two? No middle ground, gradient, or possibly even a trichotomous variation of your split?

                                                                                                                                                                                                            > loose that capability
                                                                                                                                                                                                          
                                                                                                                                                                                                          You mean "lose". ;)
                                                                                                                                                                                                          • benterix 2 days ago

                                                                                                                                                                                                            Well, in real life you always have a gradient (or, more usually, a normal distribution) - actually it would be interesting to understand what the actual distribution is and how it changes with time.

                                                                                                                                                                                                            • chaps 2 days ago

                                                                                                                                                                                                              It would! I'd be very interested in seeing the shape and outliers. Like, are there some folk who actually see improvements? Or who don't see any brain reprogramming at all? Do different methods of interacting with these systems have the same reprogramming effect? etc etc

                                                                                                                                                                                                          • andy99 2 days ago

                                                                                                                                                                                                            I think the bigger issue is outsourcing all or most thought to a LLM (data labeler really). Using tools you don't understand from first principles has always been commonplace and not really an issue. But not thinking anymore is new for most of us.

                                                                                                                                                                                                          • theptip 2 days ago

                                                                                                                                                                                                            Just beware the “real programmers hand-write assembly” fallacy. It was said that compilers would produce a generation of programmers unable to understand the workings of their programs. In some sense, this is true! But, almost nobody thinks it really matters for the actual project of building things.

                                                                                                                                                                                                            If you stop thinking, then of course you will learn less.

                                                                                                                                                                                                            If instead you think about the next level of abstraction up, then perhaps the details don’t always matter.

                                                                                                                                                                                                            The whole problem with college is that there is no “next level up”, it’s a hand-curated sequence of ideas that have been demonstrated to induce some knowledge transfer. It’s not the same as starting a company and trying to build something, where freeing up your time will let you tackle bigger problems.

                                                                                                                                                                                                            And of course this might not work for all PhDs; maybe learning the details is what matters in some fields - though with how specialized we’ve become, I could easily see this being a net win.

                                                                                                                                                                                                            • PhantomHour 2 days ago

                                                                                                                                                                                                              > Just beware the “real programmers hand-write assembly” fallacy. It was said that compilers would produce a generation of programmers unable to understand the workings of their programs. In some sense, this is true! But, almost nobody thinks it really matters for the actual project of building things.

                                                                                                                                                                                                              One of the other replies alludes to it, but I want to say it explicitly:

                                                                                                                                                                                                              The key difference is that you can generally drill down to assembly, there is infinitely precise control to be had.

                                                                                                                                                                                                              It'd be a giant pain in the ass, and not particularly fast, but if you want to invoke some assembly code in your Java, you can just do that. You want to see the JIT compiler's assembly? You can just do that. JIT Compiler acting up? Disable it entirely if you wish for more predictable & understandable execution of the code.

                                                                                                                                                                                                              And while people used to higher level languages don't know the finer details of assembly or even C's memory management, they can incrementally learn. Assembly programming is hard, but it is still programming and the foundations you learn from other programming do help you there.

                                                                                                                                                                                                              Yet AI is corrosive to those foundations.

                                                                                                                                                                                                              • theptip 2 days ago

                                                                                                                                                                                                                I don't follow; you can read the code that your LLM produces as well.

                                                                                                                                                                                                                It's way easier to drill down in this way than the bytecode/assembly vs. high-level language divide.

                                                                                                                                                                                                                • rstuart4133 2 days ago

                                                                                                                                                                                                                  > I don't follow; you can read the code that your LLM produces as well.

                                                                                                                                                                                                                  You can. You can also read the code a compiler produces perfectly well. In fact, https://godbolt.org/ is a web site dedicated to letting programmers do just that. But ... how many programmers do you know who look at the assembler their compiler produces? In fact, how many programmers do you know who understand the assembler?
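                                                                                                                                                                                                                  The same drill-down exists one level up in interpreted languages, and almost nobody uses it either. As an illustrative sketch (the toy function here is mine, not anything from the thread), CPython ships a `dis` module that shows the bytecode the interpreter actually executes:

```python
# CPython's dis module is an interpreter-level analogue of godbolt:
# it disassembles a function into the bytecode the VM actually runs.
import dis

def clamp(x, lo, hi):
    """Constrain x to the closed interval [lo, hi]."""
    return max(lo, min(x, hi))

# Prints one line per bytecode instruction (LOAD_FAST, CALL, etc.;
# exact opcode names vary between Python versions).
dis.dis(clamp)
```

                                                                                                                                                                                                                  Hardly anyone reads this output day to day, which is rather the point: the layer below is inspectable on demand, whether or not you routinely inspect it.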

                                                                                                                                                                                                                  Now let's extrapolate a bit. I've seen people say they've vibe coded some program, yet they can't program. Did they read the code the LLM produced? Of course not. Did it matter? Apparently not for the program they produced.

                                                                                                                                                                                                                  Does the fact that they can vibe code but not read code alter the types of programs they can produce? Of course it does. They're limited to the sort of programs an LLM has seen before. Does that matter? Possibly not, if the only programs they write are minor variations of what has been posted onto the internet already.

                                                                                                                                                                                                                  Now take two people, one who can only vibe code, and another who knows how to program and understands computers at a very deep level. Ask yourself, who is going to be paid more? Is it the one who can only write programs that have been seen many times before by an LLM, or is it the one who can produce something truly new and novel?

                                                                                                                                                                                                                  • yesbut 2 days ago

                                                                                                                                                                                                                    Salary aside, the vibe coders are exposing themselves to increased cognitive decline, which should be a strong enough incentive to avoid AI to begin with. Maybe they already had a cognitive impairment before reading this MIT study and can't understand the risk.

                                                                                                                                                                                                                  • PhantomHour 2 days ago

                                                                                                                                                                                                                    The distinction is that you cannot make the LLM do the drilling. And the way these tools are designed is to train the user to use the LLM rather than their own brain, so they'll never learn it themselves.

                                                                                                                                                                                                                    A big problem with the "Just read the code" approach is that reading code deeply enough to truly understand it is at minimum as time-consuming as writing it in the first place. (And in practice it tends to be significantly worse.) Anyone who claims they're properly reading the LLM's code output is on some level lying to themselves.

                                                                                                                                                                                                                    Human brains are simply bad at consistently monitoring output like that, especially if the output is consistently "good", especially especially when the errors appear to be "good" output on the surface level. This is universal across all fields and tools.

                                                                                                                                                                                                                  • Cthulhu_ 9 hours ago

                                                                                                                                                                                                                    The other one is that code to assembly is exact and repeatable. Code will (well, should, lmao) behave the same way, every time. A prompt to generate code won't.

                                                                                                                                                                                                                    Some prompts / AI agents will write all the validations and security concerns when prompted to write an API endpoint (or whatever). Others may not, because you didn't specify it.

                                                                                                                                                                                                                    But if someone who doesn't actually know about security just trusts that the AI will just do it for you - like how a developer using framework might - you'll run into issues fast.
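                                                                                                                                                                                                                  The exact/repeatable point can be made concrete in a few lines. A compiler is a pure function of its input: the same source always yields the same artifact, within one interpreter version and run. A sketch using CPython's built-in `compile` and `marshal` (the endpoint source here is invented for illustration):

```python
# Compilation is deterministic: identical source in, identical
# bytecode out. A sampled LLM completion gives no such guarantee.
import hashlib
import marshal

SRC = "def handler(req):\n    return req.get('user')\n"

def fingerprint(source: str) -> str:
    """Hash the serialized bytecode compiled from a source string."""
    code = compile(source, "<endpoint>", "exec")
    return hashlib.sha256(marshal.dumps(code)).hexdigest()

# Same input, same output, every single run of this interpreter.
assert fingerprint(SRC) == fingerprint(SRC)
```

                                                                                                                                                                                                                  Run the same prompt through an LLM twice and you get no analogous guarantee, which is why "prompt" and "source code" are not the same kind of artifact.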

                                                                                                                                                                                                                  • Jensson 2 days ago

                                                                                                                                                                                                                    > Just beware the “real programmers hand-write assembly” fallacy

                                                                                                                                                                                                                    All previous programming abstractions preserved correctness: a Python program produces results no less reliable than a C program running the same algorithm; it just takes more time.

                                                                                                                                                                                                                    LLMs don't preserve correctness: I can write a correct prompt and get incorrect results. Then you are no longer programming; you are a manager overseeing a senior programmer with extreme dementia, who forgets what they were doing a few minutes ago while you try to convince them to write what you want before they forget that as well and restart the argument.

                                                                                                                                                                                                                    • invalidptr 2 days ago

                                                                                                                                                                                                                      >All previous programming abstractions kept correctness

                                                                                                                                                                                                                      That's not strictly speaking true, since most (all?) high level languages have undefined behaviors, and their behavior varies between compilers/architectures in unexpected ways. We did lose a level of fidelity. It's still smaller than the loss of fidelity from LLMs but it is there.

                                                                                                                                                                                                                      • pnt12 2 days ago

                                                                                                                                                                                                                        That's a bit pedantic: lots of Python programs will work the same way on major OSes. If they don't, someone will likely debug the specific error and fix it. But LLMs frequently hallucinate in non-deterministic ways.

                                                                                                                                                                                                                        Also, it seems like there's little chance for knowledge transfer. If I work with dictionaries in Python all the time, eventually I'm better prepared to go under the hood and understand their implementation. If I'm prompting an LLM, what's the bridge from prompt engineering to software engineering? There's no such direct connection, surely!
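                                                                                                                                                                                                                        That bridge is real, for what it's worth: heavy dict use eventually motivates writing one by hand. A toy open-addressing hash table (illustrative only; CPython's real dict adds perturbed probing, compact ordering, and resize heuristics, and this sketch never resizes, so don't fill it):

```python
# A minimal open-addressing hash table, the same basic scheme
# CPython's dict uses under the hood.
class TinyDict:
    def __init__(self, size=8):
        self._slots = [None] * size  # each slot: (hash, key, value)

    def _probe(self, key):
        """Find the slot index for key: its entry, or the first empty slot."""
        h = hash(key)
        i = h % len(self._slots)
        while True:
            slot = self._slots[i]
            if slot is None or (slot[0] == h and slot[1] == key):
                return i, h
            i = (i + 1) % len(self._slots)  # linear probing on collision

    def __setitem__(self, key, value):
        i, h = self._probe(key)
        self._slots[i] = (h, key, value)

    def __getitem__(self, key):
        i, _ = self._probe(key)
        if self._slots[i] is None:
            raise KeyError(key)
        return self._slots[i][2]
```

                                                                                                                                                                                                                        Whether prompting an LLM ever creates the same itch to look underneath is exactly the open question.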

                                                                                                                                                                                                                        • theptip 2 days ago

                                                                                                                                                                                                                          > That's a bit pedantic

                                                                                                                                                                                                                          It's a pedantic reply to a pedantic point :)

                                                                                                                                                                                                                          > If I'm prompting a LLM, what's the bridge from prompt engineering to software engineering?

                                                                                                                                                                                                                          A sibling also made this point, but I don't follow. You can still read the code.

                                                                                                                                                                                                                          If you don't know the syntax, you can ask the LLM to explain it to you. LLMs are great for knowledge transfer, if you're actually trying to learn something - and they are strongest in domains where you have an oracle to test your understanding, like code.

                                                                                                                                                                                                                        • ashton314 2 days ago

                                                                                                                                                                                                                          Undefined behavior does not violate correctness. Undefined behavior is just wiggle room for compiler engineers to not have to worry so much about certain edge cases.

                                                                                                                                                                                                                          "Correctness" must always be considered with respect to something else. If we take e.g. the C specification, then yes, there are plenty of compilers that are in almost all ways people will encounter correct according to that spec, UB and all. Yes, there are bugs but they are bugs and they can be fixed. The LLVM project has a very neat tool called Alive2 [1] that can verify optimization passes for correctness.

                                                                                                                                                                                                                          I think there's a very big gap between the kind of reliability we can expect from a deterministic, verified compiler and the approximating behavior of a probabilistic LLM.

                                                                                                                                                                                                                          [1]: https://github.com/AliveToolkit/alive2

                                                                                                                                                                                                                          • ndsipa_pomu 2 days ago

                                                                                                                                                                                                                            However, the undefined behaviours are specified and known about (or at least some people know about them). With LLMs, there's no way to know ahead of time that a particular prompt will lead to hallucinations.

                                                                                                                                                                                                                        • Cthulhu_ 9 hours ago

                                                                                                                                                                                                                          I think the real fear is that AI will generate code that is subtly broken, but people will lose the skills to understand why it's broken; a fear that it's too much abstraction. And the other difference is that code to assembly is a 'hard' conversion, extensively tested, verified, predictable, etc, while prompt-to-code is a 'loose' conversion, where repeating the same prompt in the same agent will cause different outcomes every time.

                                                                                                                                                                                                                          • nitwit005 2 days ago

                                                                                                                                                                                                                            I'd caution that the people not familiar with working at the low level are often missing a bunch of associated knowledge which is useful in the day to day.

                                                                                                                                                                                                                    You run into Python/JavaScript/etc. programmers who have no concept of what operations might execute quickly or slowly. There isn't a mental model of what the interpreter is doing.

                                                                                                                                                                                                                            We're often insulated from the problem because the older generation often used fairly low level languages on very limited computers, and remember lessons from that era. That's not true of younger developers.
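                                                                                                                                                                                                                    One concrete piece of that missing cost model is membership tests: `x in some_list` scans linearly, while `x in some_set` is a hash lookup. A hedged sketch (absolute numbers vary by machine; the sizes are arbitrary):

```python
# Membership testing: O(n) scan over a list vs O(1) hash lookup in a set.
import timeit

items = list(range(100_000))
as_set = set(items)
needle = 99_999  # worst case for the list: scans all 100k elements

list_time = timeit.timeit(lambda: needle in items, number=200)
set_time = timeit.timeit(lambda: needle in as_set, number=200)

print(f"list: {list_time:.4f}s  set: {set_time:.6f}s")
```

                                                                                                                                                                                                                    A programmer with an interpreter-level mental model reaches for the set without thinking; one without it often only discovers the difference from a profiler, if at all.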

                                                                                                                                                                                                                            • Cthulhu_ 9 hours ago

                                                                                                                                                                                                                              Depends on the operation tbh, and whether the one or the other is a micro-optimization or actually significant. It's better to focus on high-level optimizations, core architecture decisions and the right algorithms than on an operation level. Unless those operations are executed billions of times and the difference becomes statistically significant, of course.

                                                                                                                                                                                                                            • daemin 2 days ago

                                                                                                                                                                                                                              I would agree with the statement that you don't need to know or write in assembly to build programs, but what you end up with is usually slow and inefficient.

                                                                                                                                                                                                                              Having curiosity to examine the platform that your software is running on and taking a look into what the compilers generate is a skill worth having. Even if you never write raw assembly yourself, being able to see what the compiler generated and how data is laid out does matter. This then helps you make better decisions about what patterns of code to use in your higher level language.

                                                                                                                                                                                                                              • MobiusHorizons 2 days ago

                                                                                                                                                                                                                                I have never needed to write assembly in a professional context because of the changes you describe, but this does not mean I don't have a need to understand what is going on at that level of abstraction. I _have_ had occasion to look at disassembly in the process of debugging before, and it was important that I was not completely lost when I had to do this. You don't have to do something all the time for the capacity to do something to be useful. At the end of the day engineering is about choosing the correct tradeoffs given constraints, and in a professional environment, cost is almost always one of the constraints.

                                                                                                                                                                                                                              • hintymad 2 days ago

                                                                                                                                                                                                                                This is in a way like doing math. I can read a math book all day and even appreciate the ideas in the book, but I'd practically learn little if I don't actually attempt to work out examples for the definitions, the theorems, and some exercises in the book.

                                                                                                                                                                                                                                • TheNewsIsHere 2 days ago

                                                                                                                                                                                                                                  I fall into this trap more than I’d care to admit.

                                                                                                                                                                                                                                  I love learning by reading, to the point that I’ll read the available documentation for something before I decide to use it. This consumes a lot of time, and there’s a tradeoff.

                                                                                                                                                                                                                                  Eventually if I do use the thing, I’m well suited to learning it quickly because I know where to go when I get stuck.

                                                                                                                                                                                                                                  But by the same token I read a lot of documentation I never again need to use. Sometimes it’s useful for learning about how others have done things.

                                                                                                                                                                                                                                  • Cthulhu_ 9 hours ago

                                                                                                                                                                                                                                    I'm the same, but thing is, 99% of the things I read about (on e.g. HN) are just... not important. I don't use them in my daily life.

                                                                                                                                                                                                                                    But I do have a very large knowledge base of small tidbits of information, so if I do need to ever go in-depth, I know where/how to find it.

                                                                                                                                                                                                                                    ...not that I do of course, I struggle with my long term attention span, I can't read documentation front to back and for twenty odd years now have just googled for the tidbit I needed and skipped the rest.

                                                                                                                                                                                                                                • bdelmas 2 days ago

                                                                                                                                                                                                                                  Yes, in data science there is a saying: “there is no free lunch”. With ChatGPT and others becoming so prevalent, even at the PhD level, people who work hard and avoid using these tools will more and more be seen as magicians. I already see this in coding, where people can’t code medium to hard things and their intuition, like you said, is wacky. It’s not imposter syndrome anymore; it’s people not being able to get their job done without some AI involved.

                                                                                                                                                                                                                                  What I do personally is, for every subject that matters to me, take the time to think about it first. To explore ideas, concepts, etc., and answer the questions I would otherwise ask ChatGPT. Only once I have a good idea of it do I start asking ChatGPT about it.

                                                                                                                                                                                                                                  • geye1234 2 days ago

                                                                                                                                                                                                                                    Interesting, thanks. Do you mean he would write the code out by hand on pen and paper? That has often struck me as a very good way of understanding things (granted, I don't code for my job).

                                                                                                                                                                                                                                    Similar thing in the historian's profession (which I also don't do for my job but have some knowledge of). Historians who spend all day immersed in physical archives tend, over time, to be great at synthesizing ideas and building up an intuition about their subject. But those who just Google for quotes and documents on whatever they want to write about tend to have a more static and crude view of their topic; they are less likely to consider things from different angles, or see how one thing affects another, or see the same phenomenon arising in different ways; they are more likely to become monomaniacal (exaggerated word but it gets the point across) about their own thesis.

                                                                                                                                                                                                                                    • martingalex2 2 days ago

                                                                                                                                                                                                                                      Assuming this observation applies generally, give one point to the embodiment crowd.

                                                                                                                                                                                                                                    • r_singh 13 hours ago

                                                                                                                                                                                                                                      I doubt anyone who cannot get their hands dirty with code can gain any value from prompt engineering or using LLMs...

                                                                                                                                                                                                                                      Code with LLMs gets large pretty quickly, and it would leave anyone who isn't practiced with their head spinning pretty soon, don't you think?

                                                                                                                                                                                                                                      • zingababba 2 days ago

                                                                                                                                                                                                                                        I just decided to take a break from LLMs for coding assistance a couple days ago. Feels really good. It's funny how fast I am when I just understand the code myself instead of not understanding it and proooompting.

                                                                                                                                                                                                                                        • marcofloriano 2 days ago

                                                                                                                                                                                                                                          Same here, I just ended my ChatGPT subscription, and I feel free again.

                                                                                                                                                                                                                                        • WalterBright 2 days ago

                                                                                                                                                                                                                                          I learned this in college long before AI. If I didn't do the work to solve the homework problems, I didn't learn the material, no matter how much I imagined I understood it.

                                                                                                                                                                                                                                          • scarface_74 2 days ago

                                                                                                                                                                                                                                            I work in cloud consulting specializing in application development. But most of the time when an assignment is to produce code instead of leading a project or doing strategy assessments, it’s to turn around a quick proof of concept that requires a broad set of skills - infrastructure, “DevOps”, backend development and ETL type jobs where the goal is to teach the client or to get them to sign off on a larger project where we will need to bring in a team.

                                                                                                                                                                                                                                            For my last two projects, I didn’t write a single line of code by hand. But I refuse to use agents and I build up an implementation piece by piece via prompting to make sure I have the abstractions I want and reusable libraries.

                                                                                                                                                                                                                                            I take no joy in coding anymore and I’ve been doing it for forty years. I like building systems and solving business problems.

                                                                                                                                                                                                                                            I’m not, however, disagreeing with you that LLMs will make your development skill atrophy; I’m seeing it in real time at 51. But between my customer-facing work and supporting sales and cat herding, I don’t have time to sit around and write for loops, and I’m damn sure not going to do side projects outside of work. Besides, companies aren’t willing to pay my company’s bill rates for me as a staff consultant to spend a lot of time coding.

                                                                                                                                                                                                                                            I hopefully can take solace in the fact that studies also show that learning a second language strengthens the brain and I’m learning Spanish and my wife and I plan to spend a couple of months in the winter every year in a Central American Spanish speaking country.

                                                                                                                                                                                                                                            We have already done the digital nomad thing across the US for a year until late 2023 so we are experienced with it and spent a month in Mexico.

                                                                                                                                                                                                                                            • thisisit 2 days ago

                                                                                                                                                                                                                                              This seems like the age-old discussion of how new technology changes our lives and makes us "lazy" or keeps us from learning.

                                                                                                                                                                                                                                              Before the advent of smartphones people needed to remember phone numbers of their loved ones and maybe do some small calculations on the fly. Now people sometimes don't even remember their own numbers and have it saved on their phones.

                                                                                                                                                                                                                                              Now some might want to debate how smartphones are different from LLMs and it is not the same. But we have to remember for better or worse LLM adoption has been fast and it has become consumer technology. That is the area being discussed in the article. People using it to write essays. And those who might be using the label of "prompt bros" might be missing the full picture. There are people, however small, being helped by LLMs as there were people helped by smartphones.

                                                                                                                                                                                                                                              This is by no means a defense of using LLMs for learning tasks. If you write code by yourself, you learn coding. If you write your essays yourself, you learn how to make a solid point.

                                                                                                                                                                                                                                              • marcofloriano 2 days ago

                                                                                                                                                                                                                                                It's not the same with LLMs. What the study finds is actually much more serious. When you use a phone or a calculator, you don't lose cognitive faculties. But when you delegate the thinking process to an LLM, your brain gets physically changed, which leads to cognitive damage. It's a completely different league.

                                                                                                                                                                                                                                                • fragmede 2 days ago

                                                                                                                                                                                                                                                  > When you use a phone or a calculator, you don't lose cognitive faculties.

                                                                                                                                                                                                                                                  Of course you do. I used to be able to multiply two two-digit numbers in my head. Now, my brain freezes and I reach for a calculator.

                                                                                                                                                                                                                                              • crinkly 2 days ago

                                                                                                                                                                                                                                                Good to hear. I will add that not everyone who writes a paper expects anyone to read it with that level of diligence, and it can lead to some interesting outcomes for the paper authors over time.

                                                                                                                                                                                                                                                Keep up the good work is all I can say!

                                                                                                                                                                                                                                                • giancarlostoro 2 days ago

                                                                                                                                                                                                                                                  > These days, I'm fairly senior and don't touch code much anymore but I find it really really instructive to get my hands dirty and struggle through new code and ideas. I think the "just tweak the prompts bro" people are missing out on learning.

                                                                                                                                                                                                                                                  If you just use prompts and don't actually read the output, and figure out why it worked, and why it works, you will never get better. But if you take the time to understand why it works, you will be better for it, and might not even bother asking next time.

                                                                                                                                                                                                                                                  I've said it before, but when I first started using Firefox with autocorrect in like 2005, I made it a point to learn to spell from it, so that over time I would make fewer typos. English is my second language, so it's always been an uphill battle for me despite having a native American English accent. Autocorrect on Firefox helped me tremendously.

                                                                                                                                                                                                                                                  I can use LLMs to plunge into things I'm afraid of trying out due to impostor syndrome and get more done sooner and learn on the way there. I think the key thing is to use tools correctly.

                                                                                                                                                                                                                                                  AI is like the limitless drug to a degree, you have an insane fountain of knowledge at your fingertips, you just need to use it wisely and learn from it.

                                                                                                                                                                                                                                                  • amlib 2 days ago

                                                                                                                                                                                                                                                    If that held true, then reading lots of source code from random FOSS projects would make you an amazing coder. But clearly that's not enough to internalize and learn all that knowledge; you need to experiment with it, see it running, debug it, write some, extend it, fix bugs, and so on. Just reading it, with very little context for the rest of the project, is a recipe for mediocrity.

                                                                                                                                                                                                                                                    • marcofloriano 2 days ago

                                                                                                                                                                                                                                                      Reading and understanding a prompt's output is different from actually producing the code. It's a different level of cognition.

                                                                                                                                                                                                                                                    • vonneumannstan 2 days ago

                                                                                                                                                                                                                                                      >I think the "just tweak the prompts bro" people are missing out on learning.

                                                                                                                                                                                                                                                      Alternatively they're just learning/building intuition for something else. The level of abstraction is moving upwards. I don't know why people don't seem to grok that the level of the current models is the floor, not the ceiling. Despite the naysayers like Gary Marcus, there is in fact no sign of scaling or progress slowing down at all on AI capabilities. So it might be that if there is any value in human labor left in the future it will be in being able to get AI models to do what you want correctly.

                                                                                                                                                                                                                                                      • Brian_K_White 2 days ago

                                                                                                                                                                                                                                                        Wishful, self-serving, and beside the point. The primary argument here is not about the capability of the ai.

                                                                                                                                                                                                                                                        I think the same effect has been around forever in the form of every boss/manager/ceo/rando-divorcee-or-child-with-money using employees to do their thinking as a current information-handling worker or student using an ai to do their thinking.

                                                                                                                                                                                                                                                        • vonneumannstan 2 days ago

                                                                                                                                                                                                                                                          >Wishful, self-serving, and beside the point. The primary argument here is not about the capability of the ai.

                                                                                                                                                                                                                                                          "Alternatively they're just learning/building intuition for something else."

                                                                                                                                                                                                                                                          Reading comprehension is hard.

                                                                                                                                                                                                                                                        • benterix 2 days ago

                                                                                                                                                                                                                                                          That would be true if several conditions were fulfilled, starting with LLMs actually being able to do their tasks properly, which they still very much struggle with. That basically defeats the premise of moving up an abstraction layer if you have to constantly check and correct the lower layer.

                                                                                                                                                                                                                                                          • lazide 2 days ago

                                                                                                                                                                                                                                                            I remember this exact discussion (and exact situation) with WYSIWYG UI design tools.

                                                                                                                                                                                                                                                            They were still useful, and did solve a significant portion of user problems.

                                                                                                                                                                                                                                                            They also created even more problems, and no one really went out of work long term because of them.

                                                                                                                                                                                                                                                            • asveikau 2 days ago

                                                                                                                                                                                                                                                              This reads to me as extremely defensive.

                                                                                                                                                                                                                                                              • vonneumannstan 2 days ago

                                                                                                                                                                                                                                                                It's not but ok. Just responding to another version of "This generation is screwed" that has been happening literally since Socrates.

                                                                                                                                                                                                                                                                • jplusequalt 2 days ago

                                                                                                                                                                                                                                                                  There has been a growing amount of evidence for years that modern technology is not without its side effects (mental health issues due to social media use, destruction of attention spans among the youth due to cell phone use, erosion of societal discourse and straight-up political manipulation, and now we're seeing impacts to cognitive ability from LLMs).

                                                                                                                                                                                                                                                                  • thegrim33 2 days ago

                                                                                                                                                                                                                                                                    So... people in the past were supposedly wrong about the next generation being "screwed", and therefore it's completely impossible for any new generation, at any point in history, to ever be in any way worse off or more screwed than previous generations? Because some people in the past were supposedly incorrect with similar assertions.

                                                                                                                                                                                                                                                                • jimkri 2 days ago

                                                                                                                                                                                                                                                                  I don't think Gary Marcus is necessarily a naysayer; I take it that he is trying to get people to be mindful of the current AI tooling and its capabilities, and that there is more to do before we say it is what it is being marketed as. Like, GPT5 seems to be an additional feature layer of game theory examples. Check LinkedIn for how people think it behaves, and you can see patterns. But they market it as much more.

                                                                                                                                                                                                                                                                  • vonneumannstan 2 days ago

                                                                                                                                                                                                                                                                    >I don't think Gary Marcus is necessarily a naysayer

                                                                                                                                                                                                                                                                    Oh come on. He is by far the most well known AI poo-poo'er and it's not even close. He built his entire brand on it once he realized his own research was totally irrelevant.

                                                                                                                                                                                                                                                                  • KoolKat23 2 days ago

                                                                                                                                                                                                                                                                    Agree with this.

                                                                                                                                                                                                                                                                    I mean, the guy assembling a thingymajig in the factory, after a few years, can put it together with his hands 10x faster than the actual thingymajig designer. He'll tell you to apply some more glue here and less glue there (it's probably slightly better, but immaterial really). However, he probably couldn't tell you what the fault tolerance of the item is; the designer can do that. We still outsource manufacturing to the guy in the factory regardless.

                                                                                                                                                                                                                                                                    We just have to get better at identifying risks with using the LLMs doing the grunt work and get better in mitigating them. As you say, abstracted.

                                                                                                                                                                                                                                                                    • codyb 2 days ago

                                                                                                                                                                                                                                                                      Really? No signs of slowing down?

                                                                                                                                                                                                                                                                      A year or two ago when LLMs popped on the scene my coworkers would say "Look at how great this is, I can generate test cases".

Now my coworkers are saying "I can still generate test cases! And if I'm _really specificccc_, I can get it to generate small functions too!".

                                                                                                                                                                                                                                                                      It seems to have slowed down considerably, but maybe that's just me.

                                                                                                                                                                                                                                                                      • lazide 2 days ago

                                                                                                                                                                                                                                                                        At the beginning, it’s easy to extrapolate ‘magic’ to ‘can do everything’.

                                                                                                                                                                                                                                                                        Eventually, it stops being magic and the thinking changes - and we start to see the pros and cons, and see the gaps.

                                                                                                                                                                                                                                                                        A lot of people are still in the ‘magic’ phase.

                                                                                                                                                                                                                                                                        • vonneumannstan 2 days ago

Yeah NGL if you can't get a model that is top 1% in competitive coding and IMO gold-medal tier to do anything useful, that's just an indictment of your skill level with them.

                                                                                                                                                                                                                                                                          • tuesdaynight 2 days ago

Sorry for the bluntness, but you sound like you have a lot of opinions about LLM performance for someone who says they don't use them. It's okay if you are against them, but if you last used them 3 years ago, you have no idea whether there have been improvements or not.

                                                                                                                                                                                                                                                                            • Jensson 2 days ago

You can see what people built with LLMs 3 years ago and what they build with LLMs today and compare the two.

                                                                                                                                                                                                                                                                              That is a very natural and efficient way to do it, and also more reliable than using your own experience since you are just a single data point with feelings.

                                                                                                                                                                                                                                                                              You don't have to drive a car to see where cars were 20 years ago, see where cars are today, and say: "it doesn't look like cars will start flying anytime soon".

                                                                                                                                                                                                                                                                              • tuesdaynight a day ago

                                                                                                                                                                                                                                                                                Fair, but what about saying that cars didn't improve in 20 years (the last time you drove one) because they are still not flying?

                                                                                                                                                                                                                                                                              • Peritract 2 days ago

                                                                                                                                                                                                                                                                                > you sound like you have a lot of opinions about LLM performance for someone who says that doesn't use them

                                                                                                                                                                                                                                                                                It's not reasonable to treat only opinions that you agree with as valid.

                                                                                                                                                                                                                                                                                Some people don't use LLMs because they are familiar with them.

                                                                                                                                                                                                                                                                                • tuesdaynight a day ago

My point is that this person IS NOT familiar with them, while feeling confident enough to say that these tools didn't improve with time. I'm not saying that their opinions are invalid, just highlighting the lack of experience with the current state of these AI coding agents.

                                                                                                                                                                                                                                                                                • vonneumannstan 2 days ago

                                                                                                                                                                                                                                                                                  "It can't do 9.9-9.11 or count the number of r's in strawberry!"

                                                                                                                                                                                                                                                                                  lol

                                                                                                                                                                                                                                                                                  • Nevermark 2 days ago

Since models are given tokens, not letters, to process, the famous issues with counting letters are not indicative of incompetence. Letters are simply sub-sensory for the model.

                                                                                                                                                                                                                                                                                    None of us can reliably count the e’s as someone talks to us, either.
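The point about tokens being sub-sensory can be made concrete with a toy tokenizer. The vocabulary below is hypothetical (real tokenizers learn their merges from data), but the effect is the same: the model receives a few opaque IDs, not ten letters.

```python
# Toy illustration: a model sees token IDs, not letters.
# This vocabulary is made up for the example; real BPE vocabularies
# are learned, but "strawberry" really does arrive as a few IDs.
vocab = {"str": 0, "aw": 1, "berry": 2}

def toy_tokenize(word, vocab):
    """Greedy longest-match tokenization over a toy vocabulary."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):  # try the longest piece first
            piece = word[i:j]
            if piece in vocab:
                tokens.append(vocab[piece])
                i = j
                break
        else:
            raise ValueError(f"no token for {word[i:]!r}")
    return tokens

ids = toy_tokenize("strawberry", vocab)
print(ids)                      # -> [0, 1, 2]: three opaque IDs, not 10 letters
print("strawberry".count("r"))  # -> 3: trivial when you can see the letters
```

Counting the r's is a one-liner for anything that actually sees characters; the model never does.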

                                                                                                                                                                                                                                                                                    • hatefulmoron 2 days ago

                                                                                                                                                                                                                                                                                      It does say something that the models simultaneously:

                                                                                                                                                                                                                                                                                      a) "know" that they're not able to do it for the reason you've outlined (as in, you can ask about the limitations of LLMs for counting letters in words)

                                                                                                                                                                                                                                                                                      b) still blindly engage with the query and get the wrong answer, with no disclaimer or commentary.

                                                                                                                                                                                                                                                                                      If you asked me how many atoms there are in a chair, I wouldn't just give you a large natural number with no commentary.

                                                                                                                                                                                                                                                                                      • Nevermark a day ago

                                                                                                                                                                                                                                                                                        That is interesting.

                                                                                                                                                                                                                                                                                        A factor might be that they are trained to behave like people who can see letters.

                                                                                                                                                                                                                                                                                        During training they have no ability to not comply, and during inference they have no ability to choose to operate differently than during training.

                                                                                                                                                                                                                                                                                        A pre-prompt or co-prompt that requested they only answer questions about sub-token information if they believed they actually had reason to know the answer, would be a better test.

                                                                                                                                                                                                                                                                                        • hatefulmoron 9 hours ago

                                                                                                                                                                                                                                                                                          Your prompting suggestion would certainly make them much better at this task, I would think.

I think it just points to the fact that LLMs have no "sense of self". They have no real knowledge or understanding of what they know or what they don't know. LLMs will not even reliably play the character of a machine assistant: run them long enough and they will play the character of a human being with a physical body[0]. All this points to the fact that "Claude the LLM" is just the mask the underlying model produces tokens through at first.

                                                                                                                                                                                                                                                                                          The "count the number of 'r's in strawberry" test seems to just be the easiest/fastest way to watch the mask slip. Just like that, they're mindlessly acting like a human.

                                                                                                                                                                                                                                                                                          [0]: https://www.anthropic.com/research/project-vend-1

                                                                                                                                                                                                                                                                              • fatata123 2 days ago

                                                                                                                                                                                                                                                                                LLMs are plateauing, and you’re in denial.

                                                                                                                                                                                                                                                                                • vonneumannstan 2 days ago

                                                                                                                                                                                                                                                                                  Show me one metric they are plateauing on.

                                                                                                                                                                                                                                                                            • tomrod 2 days ago

                                                                                                                                                                                                                                                                              A few things to note.

                                                                                                                                                                                                                                                                              1. This is arxiv - before publication or peer review. Grain of salt.[0]

                                                                                                                                                                                                                                                                              2. 18 participants per cohort

                                                                                                                                                                                                                                                                              3. 54 participants total

                                                                                                                                                                                                                                                                              Given the low N and the likelihood that this is drawn from 18-22 year olds attending MIT, one should expect an uphill battle for replication and for generalizability.

Further, they are brain scanning during the experiment, which is an uncomfortable, out-of-the-norm experience, and the object of the study is easy for participants to infer if not directly known (whether the person being studied is using an LLM, search tools, or no tools).

                                                                                                                                                                                                                                                                              > We thus present a study which explores the cognitive cost of using an LLM while performing the task of writing an essay. We chose essay writing as it is a cognitively complex task that engages multiple mental processes while being used as a common tool in schools and in standardized tests of a student's skills. Essay writing places significant demands on working memory, requiring simultaneous management of multiple cognitive processes. A person writing an essay must juggle both macro-level tasks (organizing ideas, structuring arguments), and micro-level tasks (word choice, grammar, syntax). In order to evaluate cognitive engagement and cognitive load as well as to better understand the brain activations when performing a task of essay writing, we used Electroencephalography (EEG) to measure brain signals of the participants. In addition to using an LLM, we also want to understand and compare the brain activations when performing the same task using classic Internet search and when no tools (neither LLM nor search) are available to the user.

                                                                                                                                                                                                                                                                              [0] https://arxiv.org/pdf/2506.08872

                                                                                                                                                                                                                                                                              • i_am_proteus 2 days ago

                                                                                                                                                                                                                                                                                >These 54 participants were between the ages of 18 to 39 years old (age M = 22.9, SD = 1.69) and all recruited from the following 5 universities in greater Boston area: MIT (14F, 5M), Wellesley (18F), Harvard (1N/A, 7M, 2 Non-Binary), Tufts (5M), and Northeastern (2M) (Figure 3). 35 participants reported pursuing undergraduate studies and 14 postgraduate studies. 6 participants either finished their studies with MSc or PhD degrees, and were currently working at the universities as post-docs (2), research scientists (2), software engineers (2)

                                                                                                                                                                                                                                                                                I would describe the study size and composition as a limitation, and a reason to pursue a larger and more diverse study for confirmation (or lack thereof), rather than a reason to expect an "uphill battle" for replication and so forth.

                                                                                                                                                                                                                                                                                • tomrod 2 days ago

                                                                                                                                                                                                                                                                                  > I would describe the study size and composition as a limitation, and a reason to pursue a larger and more diverse study for confirmation, rather than a reason to expect an "uphill battle" for replication and so forth.

                                                                                                                                                                                                                                                                                  Maybe. I believe we both agree it is a critical gap in the research as-is, but whether it is a neutral item or an albatross is an open question. Much of psychology and neuroscience research doesn't replicate, often because of the limited sample size / composition as well as unrealistic experimental design. Your approach of deepening and broadening the demographics would attack generalizability, but not necessarily replication.

                                                                                                                                                                                                                                                                                  My prior puts this on an uphill battle.

                                                                                                                                                                                                                                                                                  • genewitch 2 days ago

                                                                                                                                                                                                                                                                                    do you feel this way about every study with N~=54? For instance the GLP-1 brain cancer one?

                                                                                                                                                                                                                                                                                    • tomrod 2 days ago

                                                                                                                                                                                                                                                                                      You'll need to specify the study, I see several candidates in my search, several that are quite older.

                                                                                                                                                                                                                                                                                      Generally, yes, low N is unequivocally worse than high N in supporting population-level claims, all else equal. With fewer participants or observations, a study has lower statistical power, meaning it is less able to detect true effects when they exist. This increases the likelihood of both Type II errors (failing to detect a real effect) and unstable effect size estimates. Small samples also tend to produce results that are more vulnerable to random variation, making findings harder to replicate and less generalizable to broader populations.

                                                                                                                                                                                                                                                                                      In contrast, high-N studies reduce sampling error, provide more precise estimates, and allow for more robust conclusions that are likely to hold across different contexts. This is why, in professional and academic settings, high-N studies are generally considered more credible and influential.

                                                                                                                                                                                                                                                                                      In summary, you really need a large effect size for low-N studies to be high quality.

                                                                                                                                                                                                                                                                                      • sarchertech 2 days ago

                                                                                                                                                                                                                                                                                        The need for a large sample size is dependent on effect size.

                                                                                                                                                                                                                                                                                        The study showed that 0 of the AI users could recall a quote correctly while more than 50% of the non AI users could.

                                                                                                                                                                                                                                                                                        A sample of 54 is far, far larger than is necessary to say that an effect that large is statistically significant.

                                                                                                                                                                                                                                                                                        There could be other flaws, but given the effect size you certainly cannot say this study was underpowered.
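A rough way to sanity-check that claim is a standard power calculation for a two-proportion z-test using Cohen's arcsine effect size h. This is a stdlib sketch under that assumed test; the study itself doesn't specify a convention, and other effect-size conventions give different thresholds.

```python
from math import asin, sqrt, ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Required n per group for a two-sided two-proportion z-test,
    via the normal approximation and Cohen's arcsine effect size h."""
    h = abs(2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2)))
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_power = NormalDist().inv_cdf(power)          # power quantile
    return ceil(((z_alpha + z_power) / h) ** 2)

# Recall rates of ~50% (no-tool group) vs ~0% (LLM group) give a huge h,
# so only a handful of participants per group are needed:
print(n_per_group(0.5, 0.0))  # -> 4
```

Under these assumptions, an effect that extreme needs only about 4 participants per group at 80% power, so cohorts of 18 comfortably clear the bar for this particular finding, which is the parent's point.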

                                                                                                                                                                                                                                                                                        • tomrod 2 days ago

                                                                                                                                                                                                                                                                                          You would need the following cohort size per alpha level (currently 18) at a power level of 80% with an effect size of 50%:

                                                                                                                                                                                                                                                                                          0.05: 11 people per cohort

                                                                                                                                                                                                                                                                                          0.01: 16 people per cohort

                                                                                                                                                                                                                                                                                          0.001: 48 people per cohort

So they do clear the effect size bar for that particular finding at the 99% level, though not quite the 99.9% level. Further, selection effects matter -- are there any school-cohort effects? Is there a student bias (i.e. would a working person at the same age, or someone from a different culture or background, see the same effect)? Were the control and test groups truly random? etc. -- all of which would need a larger N to overcome.

                                                                                                                                                                                                                                                                                          So for students from the handful of colleges they surveyed, they identified the effect, but again, it's not bulletproof yet.
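The per-cohort numbers above can be sanity-checked with the standard normal-approximation formula for a two-sample comparison, n per group = 2(z_{1-α/2} + z_{power})² / d². A minimal sketch using only the Python standard library is below; note that the result depends on the effect-size convention and test assumed (this sketch uses Cohen's d for a two-sided test), so it will not necessarily reproduce the exact figures quoted above.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float, power: float) -> int:
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison of means (effect_size = Cohen's d)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for a two-sided test
    z_power = z(power)           # quantile matching the desired power
    return ceil(2 * (z_alpha + z_power) ** 2 / effect_size ** 2)

# d = 0.5, alpha = 0.05, 80% power -> about 63 per group under this formula
print(n_per_group(0.5, 0.05, 0.8))
```

A dedicated routine such as statsmodels' `TTestIndPower().solve_power` gives essentially the same answer for these inputs.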

                                                                                                                                                                                                                                                                                          • sarchertech 2 days ago

With a greater than 99% probability that this is a real effect, I wouldn't expect this to be difficult to reproduce.

                                                                                                                                                                                                                                                                                            But it turns out I misread the paper. It was actually an 80% effect size so greater than 99.9% chance of being a real effect.

Of course it could be the case that there is something different about young college students that makes them react very, very differently to LLM usage, but I wouldn't bet on it.

                                                                                                                                                                                                                                                                                  • hedora 2 days ago

                                                                                                                                                                                                                                                                                    The experimental setup is hopelessly flawed. It assumes that people’s tasks will remain unchanged in the presence of an LLM.

                                                                                                                                                                                                                                                                                    If the computer writes the essay, then the human that’s responsible for producing good essays is going to pick up new (probably broader) skills really fast.

                                                                                                                                                                                                                                                                                    • efnx 2 days ago

                                                                                                                                                                                                                                                                                      Sounds like a hypothesis! You should do a study on that.

                                                                                                                                                                                                                                                                                    • stackskipton 2 days ago

                                                                                                                                                                                                                                                                                      I'd love to see much more diverse selection of schools. All of these schools are extremely selective so you are looking at extremely selective slice of the population.

                                                                                                                                                                                                                                                                                      • sarchertech 2 days ago

Is your hypothesis that very smart people are much, much less likely to be able to remember quotes from essays they wrote with LLM assistance than dumber people?

                                                                                                                                                                                                                                                                                        I wouldn’t bet on that being the case.

                                                                                                                                                                                                                                                                                        • stackskipton a day ago

No, my hypothesis is that while this is happening to people at very selective schools, the damage it's doing at less selective schools is much, much greater.

                                                                                                                                                                                                                                                                                      • jdietrich 2 days ago

                                                                                                                                                                                                                                                                                        Most studies don't replicate. Unless a study is exceptionally large and rigorous, your expectation should be that it won't replicate.

                                                                                                                                                                                                                                                                                        • sarchertech 2 days ago

That isn't correct. It has to do with the likelihood that the study produced an effect that was actually just random chance, and both the sample size and the effect size are equally important to that.

                                                                                                                                                                                                                                                                                          This study showed an enormous effect size for some effects, so large that there is a 99.9% chance that it’s a real effect.

                                                                                                                                                                                                                                                                                      • mnky9800n 2 days ago

I feel like the refrain that pre-peer-review papers should be taken with a grain of salt should be retired. Peer review is not some idealistic scientific endeavour: it often leads to bullshit comments, slows down release, is free work for companies that have massive profit margins, etc. From my experience publishing 30+ papers, I have received as many bad or useless comments as good ones. We should at least default to open peer review and editorial communication.

                                                                                                                                                                                                                                                                                        Science should become a marketplace of ideas. Your other criticisms are completely valid. Those should be what’s front and center. And I agree with you. The conclusions of the paper are premature and designed to grab headlines and get citations. Might as well be posting “first post” on slashdot. IMO we should not see the current standard of peer review as anything other than anachronistic.

                                                                                                                                                                                                                                                                                        • chaps 2 days ago

Please no. Remember that room-temperature superconductor nonsense that went on for way too long? Let's please collectively try to avoid that.

                                                                                                                                                                                                                                                                                          • physarum_salad 2 days ago

That paper was debunked as a result of the open peer review enabled by preprints! It's astonishing how many people miss that and assume that closed peer review even performs that function well in the first place. For the absolute top journals, or those with really motivated editors, closed peer review is good. However, often it's worse... way worse (i.e. reams of correct-seeming and surface-level research without proper methods or review of protocols).

The only advantage of closed peer review is that it saves slight scientific embarrassment. However, this is a natural part of taking risks, ofc, and risky science is great.

                                                                                                                                                                                                                                                                                            P.s. in this case I really don't like the paper or methods. However, open peer review is good for science.

                                                                                                                                                                                                                                                                                            • ajmurmann 2 days ago

To your point, the paper AFAIK wasn't debunked because someone read it carefully but because people tried to reproduce it. Peer reviewers don't reproduce results. I think we'd be better off with fewer peer reviews and more time spent actually reproducing results. That's why we have a whole crisis named after exactly that.

                                                                                                                                                                                                                                                                                              • jcranmer 2 days ago

                                                                                                                                                                                                                                                                                                > To your point the paper AFAIK wasn't debunked because someone read it carefully but because people tried to reproduce it.

Actually, from my recollection, it was debunked pretty quickly by people who read the paper, because the paper was hot garbage. I saw someone point out that its graph of resistivity showed higher resistance than copper wire. It was no better than any of the other claimed room-temperature superconductor papers that came out that year; it merely caught virality on social media and therefore drove people to attempt to reproduce it.

                                                                                                                                                                                                                                                                                              • chaps 2 days ago

                                                                                                                                                                                                                                                                                                To be clear, I'm not saying that peer review is bad!! Quite the opposite.

                                                                                                                                                                                                                                                                                                • physarum_salad 2 days ago

                                                                                                                                                                                                                                                                                                  Yes ofc! I guess the major distinction is closed versus open peer review. Having observed some abuses of the former I am inclined to the latter. Although if editors are good maybe it's not such a big difference. The superconducting stuff was more of a saga rather than a reasonable process of peer review too haha.

                                                                                                                                                                                                                                                                                              • mwigdahl 2 days ago

                                                                                                                                                                                                                                                                                                And cold fusion. A friend's father (a chemistry professor) back in the early 90s wasted a bunch of time trying variants on Pons and Fleischmann looking to unlock tabletop fusion.

                                                                                                                                                                                                                                                                                              • tomrod 2 days ago

                                                                                                                                                                                                                                                                                                > I feel like saying papers pre peer review should be taken with a grain of salt should be stopped.

Absolutely not. I am an advocate for peer review, warts and all, and find that it has significant value. From a personal perspective, peer review has improved or shot down 100% of the papers that I have worked on -- which to me indicates its value in ensuring good ideas with merit make it through. Papers I've reviewed are similarly improved -- no one knows everything, and it's helpful to have others with knowledge add their voice, even when the reviewers also add cranky items.[0] I would grant that it isn't a perfect process (some reviewers and editors are bad, some steal ideas) -- but that is why the marketplace of ideas exists across journals.

                                                                                                                                                                                                                                                                                                > Science should become a marketplace of ideas.

                                                                                                                                                                                                                                                                                                This already happens. The scholarly sphere is the savanna when it comes to resources -- it looks verdant and green but it is highly resource constrained. A shitty idea will get ripped apart unless it comes from an elephant -- and even then it can be torn to shreds.

                                                                                                                                                                                                                                                                                                That it happens behind paywalls is a huge problem, and the incentive structures need to be changed for that. But unless we want blatant charlatanism running rampant, you want quality checks.

                                                                                                                                                                                                                                                                                                [0] https://x.com/JustinWolfers/status/591280547898462209?lang=e... if a car were a manuscript

                                                                                                                                                                                                                                                                                                • srkirk a day ago

What happens if (a) the scholarly sphere is continually expanding and (b) no researcher has time to be ripping apart anything? That also suggests (c) researchers delegating reviewing duties to LLMs.

                                                                                                                                                                                                                                                                                                • stonemetal12 2 days ago

Rather, given the reproducibility crisis, how much salt does peer review knock off that grain? How often does peer review catch fraud, or just bad science?

                                                                                                                                                                                                                                                                                                  • Bender 2 days ago

I would also add: how often are peer reviewers the same group of buddy-bro back-scratchers who know that if they help that person with a positive peer review, that person will return the favor? How many peer reviewers actually reproduce the results? How many peer reviewers would approve a paper if their credentials were on the line?

Ironically, I am waiting for AI to start automating the process of teasing apart obvious pencil-whipping, back-scratching, buddy-bro behavior. Some believe it's in the 1% range for falsified papers and pencil-whipped reviews. I expect it to be significantly higher, based on reading NIH papers for a long time in the attempt to actually learn things. I've reported the obvious shenanigans, and sometimes papers are taken down, but there are so many bad incentives in this process that I predict it will only get worse.

                                                                                                                                                                                                                                                                                                    • genewitch 2 days ago

Who says it's "1%"? I'd reckon it's closer to 50% than 1%; that could mean 27%, it could mean 40%. I always have this at the back of my mind when I say something and someone rejects it by citing a paper (or two). I doubt they even read the paper they're telling me to read as proof I am wrong, to start with. And then the "what are the chances this is repro?" question itches a bit.

                                                                                                                                                                                                                                                                                                      This also ignores the fact that you can find a paper to support nearly everything if one is willing to link people "correlative" studies.

                                                                                                                                                                                                                                                                                                  • srkirk 2 days ago

                                                                                                                                                                                                                                                                                                    I believe LLMs have the potential to (for good or ill, depending on your view) destroy academic journals.

The scenario I am thinking of is academic A submitting a manuscript to a journal, which the editor passes on to a number of reviewers, one of whom is academic B. B has a lot on their plate at the moment, but sees a way to quickly dispose of the reviewing task, thus maintaining a possibly illusory 'good standing' in the journal's eyes: simply throw the manuscript at an LLM to review. There are (at least) two negative scenarios here:

1. The paper contains embedded instructions (think white text on a white background) left by academic A telling any LLM reading the manuscript to view it in a positive light, regardless of how well the described work was conducted. This has already happened IRL, by the way.

2. Academic A didn't embed LLM instructions, but receives a review report that shows clear signs the reviewer either didn't understand the paper, gave unspecific comments, highlighted only typos, or simply used phrasing that seems artificially generated. A now feels aggrieved that their paper was not given the attention and consideration it deserved by an academic peer, and now has a negative opinion of the journal for (seemingly) allowing the paper to be LLM-reviewed.

And just as journals will have great difficulty filtering for LLM-generated manuscripts, they will also find it very difficult to filter for LLM-generated reviewer reports.

                                                                                                                                                                                                                                                                                                    Granted, scenario 2 already happens with only humans in the loop (the dreaded 'Reviewer 2' academic meme). But LLMs can only make this much much worse.

                                                                                                                                                                                                                                                                                                    Both scenarios destroy trust in the whole idea of peer-reviewed science journals.

                                                                                                                                                                                                                                                                                                    • perrygeo 2 days ago

                                                                                                                                                                                                                                                                                                      There's two questions at play. First, does the research pass the most rigorous criteria to become widely-accepted scientific fact? Second, does the research present enough evidence to tip your priors and change your personal decisions?

So it's possible to be both skeptical of how well these results generalize (and call for further research), but also heed the warning: AI usage does appear to change something fundamental about our cognitive processes, enough to give any reasonable person pause.

                                                                                                                                                                                                                                                                                                    • memco 2 days ago

It's also worth noting that this was specifically about the effects of ChatGPT on users' ability to write essays: which means that if you don't practice your writing skills, your writing skills decline. This doesn't seem to show that it is harmful, just that it does not induce the same brain activity that is observed with other essay-writing methods.

Additionally, the original paper uses the term "cognitive debt", not cognitive decline, which may have important ramifications for interpretation and conclusions.

                                                                                                                                                                                                                                                                                                      I wouldn’t be surprised to see similar results in other similar types of studies, but it does feel a bit premature to broadly conclude that all LLM/AI use is harmful to your brain. In a less alarmist take: this could also be read to show that AI use effectively simplifies the essay writing process by reducing cognitive load, therefore making essays easier and more accessible to a broader audience but that would require a different study to see how well the participants scored on their work.

                                                                                                                                                                                                                                                                                                      • bjourne 2 days ago

                                                                                                                                                                                                                                                                                                        > In a less alarmist take: this could also be read to show that AI use effectively simplifies the essay writing process by reducing cognitive load, therefore making essays easier and more accessible to a broader audience but that would require a different study to see how well the participants scored on their work.

                                                                                                                                                                                                                                                                                                        In much the same way chess engines make competitive chess accessible to a broader audience. :)

                                                                                                                                                                                                                                                                                                        • sarchertech 2 days ago

                                                                                                                                                                                                                                                                                                          It also showed that people couldn’t successfully recall information about what they’d just written when they used LLM assistance.

                                                                                                                                                                                                                                                                                                          Writing is an important form of learning and this clearly shows LLM assisted writing doesn’t provide that benefit.

                                                                                                                                                                                                                                                                                                        • giancarlostoro 2 days ago

                                                                                                                                                                                                                                                                                                          The other thing to note is that "AI" is being used in place of LLMs. AI is a lot of things; I would be surprised to find out that generating images, video, and audio leads to cognitive decline. What I think LLMs might lead to is intellectual laziness: why memorize or remember something if the LLM can remember it for you?

                                                                                                                                                                                                                                                                                                          • KoolKat23 2 days ago

                                                                                                                                                                                                                                                                                                            I'd say the framing is wrong. Do we call delivery drivers lazy because they take the highway rather than the backroads? Or because they drive the goods there rather than walk? They're missing out on all that traffic intersection experience.

                                                                                                                                                                                                                                                                                                            Perhaps the issue of cognitive decline comes from sitting there vegetating rather than applying themselves during all that additional spare time.

                                                                                                                                                                                                                                                                                                            Although my experience using LLMs has perhaps been different, my mind still tires at work. I'm still having to think about the bigger questions; it's just less time spent on the grunt work.

                                                                                                                                                                                                                                                                                                            • jplusequalt 2 days ago

                                                                                                                                                                                                                                                                                                              >Perhaps the issue of cognitive decline comes from sitting there vegetating rather applying themselves during all that additional spare time.

                                                                                                                                                                                                                                                                                                              The push for these tools is to increase productivity. What spare time is there to be had if now you're expected to produce 2-3X the amount of code in the same time frame?

                                                                                                                                                                                                                                                                                                              Also, I don't know if you've gotten outside of the software/tech bubble, but most people already spend 90% of their free time glued to a screen. I'd wager the majority of critical thinking people experience on a day to day basis is at work. Now that we may be automating that away, I bet you'll see many people cease to think deeply at all!

                                                                                                                                                                                                                                                                                                            • mym1990 2 days ago

                                                                                                                                                                                                                                                                                                              I would argue that intellectual laziness can and will lead to cognitive decline, just as physical laziness can and will lead to muscle atrophy. It’s akin to using a maps app to get from point A to point B but never remembering the route, even after someone has done it 100 times.

                                                                                                                                                                                                                                                                                                              I don’t know the percentage of people who are still critically thinking while using AI tools, but I can first hand see many students just copy pasting content to their school work.

                                                                                                                                                                                                                                                                                                              • giancarlostoro 2 days ago

                                                                                                                                                                                                                                                                                                                Fully agree. I think the cognitive decline probably happens over time. Look at old, retired people as an example: how they go from feeling like a teenager to barely remembering anything.

                                                                                                                                                                                                                                                                                                            • rawgabbit 2 days ago

                                                                                                                                                                                                                                                                                                              I skimmed the paper and I question the validity of the experiment.

                                                                                                                                                                                                                                                                                                              There was a “brain” group who did three sessions of essay writing and, on the fourth session, used ChatGPT. The paper’s authors said that during the fourth session, the brain group’s EEG was higher than the LLM group’s EEG when they also used ChatGPT.

                                                                                                                                                                                                                                                                                                              I interpret this as the brain group did things the hard way and when they did things the easy way, their brains were still expecting the same cognitive load.

                                                                                                                                                                                                                                                                                                              But isn’t the point of writing an essay the quality of the essay? The LLM supposedly brain-damaged group still produced an essay for session 4 that was graded “high” by both AI and human judges, but was faulted because it “stood out less” in terms of distance in n-gram usage compared to the other groups. I think this is making a mountain out of a very small molehill.
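                                                                                                                                                                                                                                                                                                              For what it's worth, "distance in n-gram usage" can be made concrete. The paper's exact metric isn't stated here, so this is only an illustrative sketch of one common approach: a Jaccard distance over word bigrams, where 0 means identical n-gram usage and 1 means no overlap at all.

```python
from collections import Counter

def ngrams(text, n=2):
    """Count word-level n-grams in a text."""
    words = text.lower().split()
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))

def jaccard_distance(a, b, n=2):
    """1 - |shared n-grams| / |union of n-grams| between two texts."""
    ga, gb = set(ngrams(a, n)), set(ngrams(b, n))
    if not ga and not gb:
        return 0.0  # two empty texts are trivially identical
    return 1 - len(ga & gb) / len(ga | gb)

# Essays that reuse the same phrasing sit close together;
# a group whose essays "stood out less" would show small distances.
print(jaccard_distance("the cat sat on the mat", "the dog sat on the log"))  # 0.75
```

                                                                                                                                                                                                                                                                                                              Under a measure like this, "stood out less" just means the LLM group's essays were closer to each other in phrasing, which says little about their quality.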

                                                                                                                                                                                                                                                                                                              • MobiusHorizons a day ago

                                                                                                                                                                                                                                                                                                                > But isn’t the point of writing an essay the quality of the essay

                                                                                                                                                                                                                                                                                                                Most of the things you write in an educational context are about learning, not about producing something of value. Productivity in a learning context is usually the wrong lens. The same thing is true IMO for learning on the job, where it is typically expected that productivity will initially be low while experience is low, but should increase over time.

                                                                                                                                                                                                                                                                                                                • rawgabbit a day ago

                                                                                                                                                                                                                                                                                                                  That may be true if this were measuring an English class. But the experiment was just writing essays; there was no instruction other than to write an essay with no tools, with ChatGPT, or with a search engine. That is, the only variable was with tool or without tool.

                                                                                                                                                                                                                                                                                                              • somenameforme 2 days ago

                                                                                                                                                                                                                                                                                                                In general I agree with you regarding the weakness of the paper, but not the skepticism towards its outcome.

                                                                                                                                                                                                                                                                                                                  Our bodies naturally adjust to what we do. Do things and your body reinforces that, enabling you to do even more advanced versions of those things. Don't do things and your skill or muscle in them tends to atrophy over time. Asking LLMs to (as in this case) write an essay is always going to be orders of magnitude easier than actually writing an essay. And so it seems fairly self-evident that using LLMs to write essays would gradually degrade your own ability to do so.

                                                                                                                                                                                                                                                                                                                I mean it's possible that this, for some reason, might not be true, but that would be quite surprising.

                                                                                                                                                                                                                                                                                                                • tomrod 2 days ago

                                                                                                                                                                                                                                                                                                                    Ever read the books in the Bobiverse? They provide a pretty functional cognitive model for how human interfaces with tooling like AI will probably work (even though it is fiction): lower-level actions are pushed into autonomous regions until a certain deviancy threshold is reached. Much like breathing: you don't typically think about breathing until it becomes a problem (choking, underwater, etc.), and then it very much occupies the high level of the brain.

                                                                                                                                                                                                                                                                                                                  What is reported as cognitive decline in the paper might very well be cognitive decline. It could also be alternative routing focused on higher abstractions, which we interpret as cognitive decline because the effect is new.

                                                                                                                                                                                                                                                                                                                    I share your concern, for the record, that people become too attached to LLMs for generation of creative work. However, I will say it can absolutely be used to unblock and push more through. The quality-versus-quantity balance definitely needs consideration (which I think they are actually capturing, vs. cognitive decline) -- the real question to me is whether an individual's production possibility frontier is increased (which means more value per person -- a win!), partially negative in impact (use with caution), or decreased overall (a major loss). Cognitive decline points to the last of these.

                                                                                                                                                                                                                                                                                                                • falconroar a day ago

                                                                                                                                                                                                                                                                                                                  Essential context. So many variables here with very naive experimental procedure. Also "Cognitive Decline" is never mentioned in the paper.

                                                                                                                                                                                                                                                                                                                  An equally valid conclusion is "People are Lazier at Writing Essays When Provided with LLMs".

                                                                                                                                                                                                                                                                                                                  • ndkap 2 days ago

                                                                                                                                                                                                                                                                                                                      Why would people publish research with such a small sample size?

                                                                                                                                                                                                                                                                                                                    • sarchertech 2 days ago

                                                                                                                                                                                                                                                                                                                      Because some of the effect sizes were so large that the probability of the effect being real is greater than 99.9%.

                                                                                                                                                                                                                                                                                                                    • dmitrygr 2 days ago

                                                                                                                                                                                                                                                                                                                      I expect this to replicate trivially. You would too if you had interacted with anybody who started using LLMs more and more recently. You can literally watch the IQs drop like a rock. People who used to be lively debaters now need to ask Grok or ChatGPT before saying anything.

                                                                                                                                                                                                                                                                                                                      • IshKebab 2 days ago

                                                                                                                                                                                                                                                                                                                        Yeah my bullshit detector is going off even more than when I use ChatGPT...

                                                                                                                                                                                                                                                                                                                        4. This is clickbait research, so it's automatically less likely to be true.

                                                                                                                                                                                                                                                                                                                        5. They are touting obvious things as if they are surprising, like the fact that you're less likely to remember an essay that you got something else to write, or that the ChatGPT essays were verbose and superficial.

                                                                                                                                                                                                                                                                                                                        • dahart 2 days ago

                                                                                                                                                                                                                                                                                                                          This comment reminds me of the so-called Dunning-Kruger effect. That paper had pretty close to the same sample size, and participants were pulled from a single school (Cornell). It also has major methodology problems, and has had an uphill battle for replication and generalizability, actually losing the battle in some cases. And yet we have a famous term for it that people love to use, often and incorrectly, even when you take the paper at face value!

                                                                                                                                                                                                                                                                                                                          The problem is that a headline people want to believe is a very powerful force, one that can override replication and sample size and methodology problems. “AI rots your brain” follows behind “social media rots your brain”, which came after “video games rot your brain”, which was preceded by “TV rots your brain”. I’m sure TV wasn’t even the first. There’s a long tradition of publicly worrying about machines making us stupider.

                                                                                                                                                                                                                                                                                                                          • tomrod 2 days ago

                                                                                                                                                                                                                                                                                                                            > The problem is that a headline people want to believe is a very powerful force, one that can override replication and sample size and methodology problems. “AI rots your brain” follows behind “social media rots your brain”, which came after “video games rot your brain”, which was preceded by “TV rots your brain”. I’m sure TV wasn’t even the first. There’s a long tradition of publicly worrying about machines making us stupider.

                                                                                                                                                                                                                                                                                                                            Your comment reminded me of this (possibly spurious) quote:

                                                                                                                                                                                                                                                                                                                            >> An Assyrian clay tablet dating to around 2800 B.C. bears the inscription: “Our Earth is degenerate in these later days; there are signs that the world is speedily coming to an end; bribery and corruption are common; children no longer obey their parents; every man wants to write a book and the end of the world is evidently approaching.”[0]

                                                                                                                                                                                                                                                                                                                            Same as it ever was. [1]

                                                                                                                                                                                                                                                                                                                            [0] https://quoteinvestigator.com/2012/10/22/world-end/

                                                                                                                                                                                                                                                                                                                            [1] https://www.youtube.com/watch?v=5IsSpAOD6K8

                                                                                                                                                                                                                                                                                                                            • genewitch 2 days ago

                                                                                                                                                                                                                                                                                                                              There are newspaper clippings with the same headlines / lead-ins dating back to the earliest newspapers, so this has been going on for at least five generations, which lends it a bit of credence.

                                                                                                                                                                                                                                                                                                                              People have also been complaining about politicians for hundreds of years, and the ruling class for millennia, as well. and the first written math mistake was about beer feedstock, so maybe it's all correlated.

                                                                                                                                                                                                                                                                                                                              • tomrod 2 days ago

                                                                                                                                                                                                                                                                                                                                Fun times, being so close to history :)

                                                                                                                                                                                                                                                                                                                            • hamburga 2 days ago

                                                                                                                                                                                                                                                                                                                              Socrates famously complained about literacy making us stupider in Phaedrus.

                                                                                                                                                                                                                                                                                                                              Which I believe still does have a large grain of truth.

                                                                                                                                                                                                                                                                                                                              These things can make us simultaneously dumber and smarter, depending on usage.

                                                                                                                                                                                                                                                                                                                              • imchillyb 2 days ago

                                                                                                                                                                                                                                                                                                                                 Socrates was correct. In his day, memory was treasured. Memory was how ideas were linked, how quotations were retained, and how arguments were made.

                                                                                                                                                                                                                                                                                                                                 Writing leads to a rapid decline in memory function. Brains are lazy.

                                                                                                                                                                                                                                                                                                                                 Ever travel to a new place and the brain pipes up with: ‘this place is just like ___’? That’s the brain’s laziness showing itself. The brain says: ‘okay, I solved that, go back to rest.’ The observation is never true, never accurate.

                                                                                                                                                                                                                                                                                                                                 Pattern recognition saves us time and enables us to survive situations that aren’t readily survivable. But pattern recognition also leads to shortcuts that do humanity a disservice.

                                                                                                                                                                                                                                                                                                                                 Socrates recognized these traits in our brains and tried to warn humanity of the damage these shortcuts do to our reasoning and comprehension skills. In Socrates’ day it was not unheard of for a person to memorize their entire family tree, or to memorize an entire treatise and quote from it.

                                                                                                                                                                                                                                                                                                                                Humanity has -overwhelmingly- lost these abilities. We rely upon our external memories. We forget names. We forget important dates. We forget times and seasons. We forget what we were just doing!!!

                                                                                                                                                                                                                                                                                                                                 Socrates had the right of it. Writing makes humans stupid. It reduces our token limits. Reduces our page-table sizes. Reduces overall conversation length.

                                                                                                                                                                                                                                                                                                                                We may have more learning now, but what have we given up to attain it?

                                                                                                                                                                                                                                                                                                                                • dahart a day ago

                                                                                                                                                                                                                                                                                                                                  This is an interesting argument. I’m not convinced but I’m open to hearing more. Don’t we only know about Socrates because he was written about? What evidence do we have that writing reduces memory at all? Don’t studies of students show taking notes increases retention? Anecdotally, the writers I know tend to demonstrate the opposite of what you’re saying, they seem to read, think, converse, and remember more than people who aren’t writing regularly. What exactly have we given up to attain more learning? We still have people who can memorize long things today, is it any fewer than in Socrates’ day? How do we know? Do you subscribe to the idea that the printing press accelerated collective memory, which is far more important for technology and industrial development and general information keeping than personal memory? Most people in Socrates’ day, and before, and since, all forgot their family trees, but thankfully some people wrote them down so we still have some of it. Future generations won’t have the gaps in history we have today.

                                                                                                                                                                                                                                                                                                                              • boringg 2 days ago

                                                                                                                                                                                                                                                                                                                                 I mean, there are clear problems with heavy exposure to TV and video games, and I have no doubt that there are similar problems with heavy AI use. Any adult with children can clearly see the addictive qualities and the behavioral fallout.

                                                                                                                                                                                                                                                                                                                                • dahart 2 days ago

                                                                                                                                                                                                                                                                                                                                   I agree with you there. I wasn’t really arguing that there aren’t behavior or addiction problems; however, you inserted the word “heavy”, and it’s doing some heavy lifting here. You also brought up addiction and behavior, which aren’t really the same topic as cognitive decline. The posted article isn’t arguing that there’s a heavy-use threshold, it’s arguing that all use of AI is brain-damaging. Lots of historical hyperbole about games, TV, and social media did the same.

                                                                                                                                                                                                                                                                                                                                  One confounding problem with the argument that TV and video games made kids dumber is the Flynn Effect. https://en.wikipedia.org/wiki/Flynn_effect

                                                                                                                                                                                                                                                                                                                            • LocalPCGuy 2 days ago

                                                                                                                                                                                                                                                                                                                               This is a bad and sloppy regurgitation of a previous (and more original) source[1], and the headline and article explicitly ignore the paper authors' plea[2] to avoid using the paper to draw the exact conclusions this article says it draws.

                                                                                                                                                                                                                                                                                                                              The comments (some, not all) are also a great example of how cognitive bias can cause folks to accept information without doing a lot of due diligence into the actual source material.

                                                                                                                                                                                                                                                                                                                              > Is it safe to say that LLMs are, in essence, making us "dumber"?

                                                                                                                                                                                                                                                                                                                              > No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "passivity", "trimming" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it

                                                                                                                                                                                                                                                                                                                              > Additional vocabulary to avoid using when talking about the paper

                                                                                                                                                                                                                                                                                                                              > In addition to the vocabulary from Question 1 in this FAQ - please avoid using "brain scans", "LLMs make you stop thinking", "impact negatively", "brain damage", "terrifying findings".

                                                                                                                                                                                                                                                                                                                              1. https://www.brainonllm.com/

                                                                                                                                                                                                                                                                                                                              2. https://www.brainonllm.com/faq

                                                                                                                                                                                                                                                                                                                              • causal 2 days ago

                                                                                                                                                                                                                                                                                                                                Yeah I feel like HN is being Reddit-ified with the amount of reposted clickbait that keeps making the front page :(

                                                                                                                                                                                                                                                                                                                                 This study in particular has made the rounds several times, as you said. It measures the impact on 18 people who used ChatGPT just four times over four months. I'm sorry, but there is no way that controls for noise.

                                                                                                                                                                                                                                                                                                                                I'm sympathetic to the idea that overusing AI causes atrophy but this is just clickbait for a topic we love to hate.

                                                                                                                                                                                                                                                                                                                                • Mentlo a day ago

                                                                                                                                                                                                                                                                                                                                   Ironically, you’re now replicating the Reddit-ified response to this paper by attacking the sample size.

                                                                                                                                                                                                                                                                                                                                  The sample size is fine. It’s small, yes, but normal for psychological research which is hard to do at scale.

                                                                                                                                                                                                                                                                                                                                   And the difference between groups is so large that the noise would have to be at unheard-of levels to taint the finding.

                                                                                                                                                                                                                                                                                                                                  • LocalPCGuy 2 days ago

                                                                                                                                                                                                                                                                                                                                    Yup, I even found myself a bit hopeful that maybe it was a follow-up or new study and we'd get either more or at least different information. But that bit of hope is also an example of my bias/sympathy to that idea that it might be harmful.

                                                                                                                                                                                                                                                                                                                                    It should be ok to just say "we don't know yet, we're looking into that", but that isn't the world we live in.

                                                                                                                                                                                                                                                                                                                                    • tarsinge 2 days ago

                                                                                                                                                                                                                                                                                                                                       Ironically, there should be another study on how not using AI is also leading to cognitive decline on Reddit. On programming subreddits, people have lost all sense of engineering and have simply become religious about opposing a tool.

                                                                                                                                                                                                                                                                                                                                      • GeoAtreides 2 days ago

                                                                                                                                                                                                                                                                                                                                        >I feel like HN is being Reddit-ified

                                                                                                                                                                                                                                                                                                                                         It's September and September never ends

                                                                                                                                                                                                                                                                                                                                      • NapGod 2 days ago

                                                                                                                                                                                                                                                                                                                                         Yeah, it's clear no one is actually reading the paper. The study showed that the group who used LLMs for the first three sessions, then had to do session 4 without them, had lower brain connectivity than was recorded in session 3, with all groups showing some kind of increase from one session to the next. Importantly, this group's brain connectivity didn't reset to session 1 levels, but landed somewhere in between. They were still learning and getting better at the essay-writing task. In session 4 they effectively had part of the brain network they were using for the task taken away, so obviously there's a dip in performance. None of this says anyone got dumber. The philosophical concept of the Extended Mind is key here.

                                                                                                                                                                                                                                                                                                                                         imo the most interesting result is that the brains of the group that had done sessions 1-3 without search-engine or LLM aids lit up like Christmas trees in session 4 when they were given LLMs to use, and that's what the paper's conclusions really focus on.

                                                                                                                                                                                                                                                                                                                                        • marcofloriano 2 days ago

                                                                                                                                                                                                                                                                                                                                          > Is it safe to say that LLMs are, in essence, making us "dumber"?

                                                                                                                                                                                                                                                                                                                                          > No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "passivity", "trimming" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it

                                                                                                                                                                                                                                                                                                                                           Maybe it's not safe to say so far, but it has been my experience using ChatGPT for eight months to code. My brain is getting slower and slower, and that study makes a hell of a lot of sense to me.

                                                                                                                                                                                                                                                                                                                                           And I don't think we will see new studies on this subject, because those leading society as a whole don't want negative press about AI.

                                                                                                                                                                                                                                                                                                                                          • LocalPCGuy 2 days ago

                                                                                                                                                                                                                                                                                                                                             You are referencing your own personal experience, and while that is an entirely valid opinion for you to have about your own usage, it's not possible to extrapolate it across an entire population. Whether or not you're doing that, part of the point I was making was how people who "think it makes sense" will often not critically analyze something because it already agrees with their preconceived notion. Super common; I'm just calling it out because we can all do better.

                                                                                                                                                                                                                                                                                                                                            All we can say right now is "we don't really know how it affects our brains", and we won't until we get some studies (which is what the underlying paper was calling for, more research).

                                                                                                                                                                                                                                                                                                                                             Personally I do think we'll get more studies, but the quality is the question for me - it's really hard to do a study right when, by the time it's done, two new generations of LLMs have been released, making the study data potentially obsolete. So researchers are going to be tempted to go faster, use fewer people, and be less rigorous overall, which in turn may make for bad results.

                                                                                                                                                                                                                                                                                                                                        • TheAceOfHearts 2 days ago

                                                                                                                                                                                                                                                                                                                                          Personally, I don't think you should ever allow the LLM to write for you or to modify / update anything you're writing. You can use it to get feedback when editing, to explore an idea-space, and to find any topical gaps. But write everything yourself! It's just too easy to give in and slowly let the LLM take over your brain.

                                                                                                                                                                                                                                                                                                                                          This article is focused on essay writing, but I swear I've experienced cognitive decline when using AI tools a bit too much to help solve programming-related problems. When dealing with an unfamiliar programming ecosystem it feels so easy and magical to just keep copy / pasting error outputs until the problem is resolved. Previously solving the problem would've taken me longer but I would've also learned a lot more. Then again, LLMs also make it way easier to get started and feel like you're making significant progress, instead of getting stuck at the first hurdle. There's definitely a balance. It requires a lot of willpower to sit with a problem in order to try and work through it rather than praying to the LLM slot machine for an instant solution.

                                                                                                                                                                                                                                                                                                                                          • jbstack 2 days ago

                                                                                                                                                                                                                                                                                                                                            > I've experienced cognitive decline when using AI tools a bit too much to help solve programming-related problems. When dealing with an unfamiliar programming ecosystem it feels so easy and magical to just keep copy / pasting error outputs until the problem is resolved. Previously solving the problem would've taken me longer but I would've also learned a lot more.

                                                                                                                                                                                                                                                                                                                                            I've had the opposite experience, but my approach is different. I don't just copy/paste errors, accept the AI's answer when it works, and move on. I ask follow up questions to make sure I understand why the AI's answer works. For example, if it suggests running a particular command, I'll ask it to break down the command and all the flags and explain what each part is doing. Only when I'm satisfied that I can see why the suggestion solves the problem do I accept it and move on to the next thing.

                                                                                                                                                                                                                                                                                                                                            The tradeoff for me ends up being that I spend less time learning individual units of knowledge than if I had to figure things out entirely myself e.g. by reading the manual (which perhaps leads to less retention), but I learn a greater quantity of things because I can more rapidly move on to the next problem that needs solving.

                                                                                                                                                                                                                                                                                                                                            • mzajc 2 days ago

                                                                                                                                                                                                                                                                                                                                              > I ask follow up questions to make sure I understand why the AI's answer works.

                                                                                                                                                                                                                                                                                                                                               I've tried a similar approach and found it very prone to hallucination[0]. I tend to google things first and ask an LLM as a fallback, so maybe it's not a fair comparison, but what do I need an LLM for if a search engine can answer my question?

                                                                                                                                                                                                                                                                                                                                               [0]: Just the other day I asked ChatGPT what a colon (':') after systemd's ExecStart= means. The correct answer is that it inhibits environment variable expansion, but it kept giving me convincing yet incorrect answers.
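
                                                                                                                                                                                                                                                                                                                                               (For anyone curious: the systemd.service man page documents the ':' as one of the prefixes allowed on the ExecStart= command line, meaning environment variable substitution is not applied to that command. A rough sketch of the difference, using a hypothetical oneshot unit:)

                                                                                                                                                                                                                                                                                                                                               ```ini
                                                                                                                                                                                                                                                                                                                                               # Hypothetical unit fragment illustrating the ':' prefix.
                                                                                                                                                                                                                                                                                                                                               [Service]
                                                                                                                                                                                                                                                                                                                                               # Type=oneshot permits multiple ExecStart= lines.
                                                                                                                                                                                                                                                                                                                                               Type=oneshot
                                                                                                                                                                                                                                                                                                                                               Environment=GREETING=hello
                                                                                                                                                                                                                                                                                                                                               # No prefix: $GREETING is substituted, so this echoes "hello".
                                                                                                                                                                                                                                                                                                                                               ExecStart=/usr/bin/echo $GREETING
                                                                                                                                                                                                                                                                                                                                               # ':' prefix: substitution is inhibited, so this echoes the
                                                                                                                                                                                                                                                                                                                                               # literal string "$GREETING".
                                                                                                                                                                                                                                                                                                                                               ExecStart=:/usr/bin/echo $GREETING
                                                                                                                                                                                                                                                                                                                                               ```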

                                                                                                                                                                                                                                                                                                                                              • jbstack 2 days ago

                                                                                                                                                                                                                                                                                                                                                It's a tradeoff. After using ChatGPT for a while you develop somewhat of an instinct for when it might be hallucinating, especially when you start probing it for the "why" part and you get a feel for whether its explanations make sense. Having at least some domain knowledge helps too - you're more at risk of being fooled by hallucinations if you are trying to get it to do something you know nothing about.

                                                                                                                                                                                                                                                                                                                                                While not foolproof, when you combine this with some basic fact-checking (e.g. quickly skim read a command's man page to make sure the explanation for each flag sounds right, or read the relevant paragraph from the manual) plus the fact that you see in practice whether the proposed solution fixes the problem, you can reach a reasonably high level of accuracy most of the time.

                                                                                                                                                                                                                                                                                                                                                Even with the risk of hallucinations it's still a great time saver because you short-circuit the process of needing to work out which command is useful and reading the whole of the man page / manual until you understand which component parts do the job you want. It's not perfect but neither is Googling - that can lead to incorrect answers too.

                                                                                                                                                                                                                                                                                                                                                To give an example of my own, the other day I was building a custom Incus virtual machine image from scratch from an ISO. I wanted to be able to provision it with cloud-init (which comes configured by default in cloud-enabled stock Incus images). For some reason, even with cloud-init installed in the guest, the host's provisioning was being ignored. This is a rather obscure problem for which Googling was of little use because hardly anyone makes cloud-init enabled images from ISOs in Incus (or if they do, they don't write about it on the internet).

                                                                                                                                                                                                                                                                                                                                                At this point I could have done one of two things: (a) spend hours or days learning all about how cloud-init works and how Incus interacts with it until I eventually reached the point where I understood what the problem was; or (b) ask ChatGPT. I opted for the latter and quickly figured out the solution and why it worked, thus saving myself a bunch of pointless work.
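(A hedged sketch of a first diagnostic step for anyone hitting something similar; this is not necessarily the fix I found, just the standard way to ask cloud-init itself what happened inside the guest:)

```shell
#!/bin/sh
# Diagnostic sketch only, not the actual fix from the anecdote above.
# Inside the guest, check whether cloud-init ran at all:
cloud-init status --long 2>/dev/null || echo "cloud-init not on PATH"
# ds-identify records why each datasource was accepted or rejected,
# which usually explains why host-supplied provisioning was ignored:
cat /run/cloud-init/ds-identify.log 2>/dev/null || echo "no ds-identify log found"
```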

                                                                                                                                                                                                                                                                                                                                                • majewsky 2 days ago

                                                                                                                                                                                                                                                                                                                                                  Does it work better when the AI is instructed to describe a method of answering the question, instead of answering the question directly?

                                                                                                                                                                                                                                                                                                                                                  For example, in this specific case, I am enough of a domain expert to know that this information is accessible by running `man systemd.service` and looking for the description of command line syntax (findable with grep for "ExecStart=", or, as I have now seen in preparing this answer, more directly with grep for "COMMAND LINES").
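As a sketch (assuming systemd's man pages are installed; the fallback message is just for machines where they aren't):

```shell
#!/bin/sh
# "Ask for the method, then verify against the primary source" workflow.
if man systemd.service >/dev/null 2>&1; then
  # Find the section describing command-line syntax (including the ':' prefix):
  man systemd.service | grep -n 'COMMAND LINES'
  # Or jump straight to the directive itself:
  man systemd.service | grep -n 'ExecStart=' | head -n 5
else
  echo "systemd man pages not installed"
fi
```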

                                                                                                                                                                                                                                                                                                                                                  • mzajc 2 days ago

                                                                                                                                                                                                                                                                                                                                                    That's a much better option since the LLM is no longer the source of truth. Unfortunately, it only works in cases where the feature is properly documented, which isn't the case here.

                                                                                                                                                                                                                                                                                                                                                  • dpkirchner 2 days ago

                                                                                                                                                                                                                                                                                                                                                    Could you give an example of an ExecStart line that uses a colon? I haven't found any documentation for that while using Google and I don't have examples of it in my systemd unit files.

                                                                                                                                                                                                                                                                                                                                                    • mzajc 2 days ago

Yup, it's undocumented for some reason. I don't remember where I saw it used, but as an example:

                                                                                                                                                                                                                                                                                                                                                        [Service]
                                                                                                                                                                                                                                                                                                                                                        ExecStart=/bin/echo $PATH
                                                                                                                                                                                                                                                                                                                                                      
will log the expanded value of $PATH, while

                                                                                                                                                                                                                                                                                                                                                        [Service]
                                                                                                                                                                                                                                                                                                                                                        ExecStart=:/bin/echo $PATH
                                                                                                                                                                                                                                                                                                                                                      
will log the literal string $PATH.
                                                                                                                                                                                                                                                                                                                                                  • kjkjadksj 2 days ago

                                                                                                                                                                                                                                                                                                                                                    I think the school experience proves that doesn’t work. Reminds me of a teacher carefully breaking down the problem on the board and you nodding along when it is unfolding in front of you in a directed manner. The question is if you can do it yourself come the exam. If all you did to prepare is watch the teacher solve it, with no attempt to solve it from scratch yourself during practice, you will fail the exam.

                                                                                                                                                                                                                                                                                                                                                    • jbstack 2 days ago

                                                                                                                                                                                                                                                                                                                                                      That very much depends on the topic being studied. I've passed plenty of exams of different levels (school, university, professional qualifications) just by reading the textbook and memorising key facts. I'd agree with you if we are talking about something like maths.

                                                                                                                                                                                                                                                                                                                                                      Also, there's a huge difference between passively watching a teacher write an explanation on a board, and interactively quizzing the teacher (or in this case, LLM) in order to gain a deeper and personalised understanding.

                                                                                                                                                                                                                                                                                                                                                      • kjkjadksj a day ago

The issue is verifying that the LLM is returning actual facts. By the time you've checked its answer against sufficiently reliable sources, you no longer need the LLM.

                                                                                                                                                                                                                                                                                                                                                  • giancarlostoro 2 days ago

When Firefox added autocorrect and I started using it, I made it a point to learn what it was telling me was correct, so I could write more accurately. I have since become drastically better at spelling. I still goof, and I'm even worse when pronouncing words I've read but never heard. English is my second language, mind you.

I think any developer worth their salt would use LLMs to learn quicker and arrive at conclusions quicker. There are some programming problems I run into when working on a new project that I've run into before but cannot recall my last solution to, which is frustrating; I could see how an LLM would help such a solution come back quicker. Sometimes it's 'first time setup' stuff that you have not had to do for five years, so you forget. Maybe you wrote it down on a wiki two jobs ago, but an LLM could help you remember.

                                                                                                                                                                                                                                                                                                                                                    I think we need to self-evaluate how we use LLMs so that they help us become better Software Engineers, not worse ones.

                                                                                                                                                                                                                                                                                                                                                    • lazide 2 days ago

                                                                                                                                                                                                                                                                                                                                                      I’d consider it similar to always using a GPS/Google Maps/Apple Maps to get somewhere without thinking about it first.

                                                                                                                                                                                                                                                                                                                                                      It’s really convenient. It also similarly rots the parts of the brain required for spatial reasoning and memory for a geographic area. It can also lead to brain rot with decision making.

                                                                                                                                                                                                                                                                                                                                                      Usually it’s good enough. Sometimes it leads to really ridiculous outcomes (especially if you never double check actual addresses and just put in a business name or whatever). In many edge cases depending on the use case, it leads to being stuck, because the maps data is wrong, or doesn’t have updated locations, or can’t consider weather conditions, etc. especially if we’re talking in the mountains or outside of major cities.

                                                                                                                                                                                                                                                                                                                                                      Doing it blindly has led to numerous people dying by stupidly getting themselves into more and more dumb situations.

People still got stuck using paper maps. Sometimes they even died. But it was much rarer, and people were more aware that they were lost, instead of pressing on convinced they weren't. So different failure modes.

Paper maps were very inconvenient, so people dealt with that through more human interaction and by adding more buffer time, which had its own costs.

In areas where there are active bad actors (Eastern Europe nowadays, and other areas in that region at times), it leads to actively pathological outcomes.

                                                                                                                                                                                                                                                                                                                                                      It is now rare for anyone outside of conflict zones to use paper maps except for specific commercial and gov’t uses, and even then they often use digitized ‘paper’ maps.

                                                                                                                                                                                                                                                                                                                                                      • defgeneric 2 days ago

This is exactly the problem, but there's still a sweet spot where you can get quickly up to speed on technical areas adjacent to your specialty, so that small gaps in your own knowledge don't hold you back from the main task. I was quickly able to do some signal processing for underwater acoustics in C, for example, and don't really plan to become highly proficient in it. I was able to get something workable and move on to other tasks, while still getting an idea of what was involved if I ever wanted to come back to it. In the past I would have just read a bunch of existing code.

                                                                                                                                                                                                                                                                                                                                                        • Manik_agg 2 days ago

I agree. Asking an LLM to write for you is being lazy, and it also produces sub-par results (I don't know about brain-rot).

I also like preparing a draft and using an LLM for critique; it helps me find blind spots or better ways to articulate things.

                                                                                                                                                                                                                                                                                                                                                        • sudosteph 2 days ago

                                                                                                                                                                                                                                                                                                                                                          Meanwhile my main use cases for AI outside of work:

                                                                                                                                                                                                                                                                                                                                                          - Learning how to solder

                                                                                                                                                                                                                                                                                                                                                          - Learning how to use a multimeter

- Learning to build basic circuits on breadboards

- Learning about solar panels, MPPT, battery management systems, and different variations of li-ion batteries

- Learning about the LoRa band / Meshtastic / how to build my own antenna

And every single one of these things I've learned I've also applied practically to experiment and learn more. I'm doing things with my brain that I couldn't do before, and it's great. When something doesn't work like I thought it would, AI helps me understand where I may have gone wrong; I ask it a ton of questions and try again until I understand how it works and how to prove it.

                                                                                                                                                                                                                                                                                                                                                          You could say you can learn all of this from YouTube, but I can't stand watching videos. I have a massive textbook about electronics, but it doesn't help me break down different paths to what I actually want to do.

                                                                                                                                                                                                                                                                                                                                                          And to be blunt: I like making mistakes and breaking things to learn. That strategy works great for software (not in prod obviously...), but now I can do it reasonably effectively for cheap electronics too.

                                                                                                                                                                                                                                                                                                                                                          • nancyminusone 2 days ago

                                                                                                                                                                                                                                                                                                                                                            As someone who does these things, I am curious to know how and why you would choose AI.

Learning these from text seems like the hardest way I can think of to learn them. I've yet to encounter a written description of what it feels like to solder, what a good/bad joint actually looks like, etc. A well-shot video is much better at showing you what you need to do (although finding one is getting more and more difficult).

                                                                                                                                                                                                                                                                                                                                                            • sudosteph 2 days ago

I just process text information better. Videos are kind of overstimulating and often have unrelated content, and I hate having to rewind back to a part I need while I'm in the middle of something. With LLMs I can get a broad overview of what I'm doing, tell it what materials I already have on hand, and get specific ideas for how to practice. Soldering is probably one of the harder ones to learn by text, but the description of the techniques to use was actually really understandable (use flux, be sure the tip is tinned, touch the pad with the tip to warm it up a little, touch again with the iron on one side of the pad and feed the solder in on the other side so it gets drawn in, then pull away; the timing was trial and error). And then I'd upload a picture of what I did for review, and it would point out the joints that had issues and what likely went wrong to cause them (ex: solder sticking to the tip of the iron and not the pad), and I would keep practicing and test that it worked and looked like what was described. It may not be the ideal technique or outcome, but it unblocked me relatively quickly so I could continue my project.

                                                                                                                                                                                                                                                                                                                                                              Being able to ask it stupid questions and edge cases is also something I like with LLMs, like I would propose a design for something (ex: a usb battery pack w/ lifepo4 batts that could charge my phone and be charged by solar at the same time), it would say what it didn't like about my design, counter with its own, then I would try to change aspects of their design to see "what would happen if .." and it would explain why it chose a particular component or design choice and what my change would do and the trade-offs, risks, etc other paths to building it with that, etc. Those types of interactions are probably the best for me actually understanding things, helps me understand limitations and test my assumptions interactively.

                                                                                                                                                                                                                                                                                                                                                              • efreak a day ago

                                                                                                                                                                                                                                                                                                                                                                > I just process text information better. Videos are kind of overstimulating and often have unrelated content, and I hate having to rewind back to a part I need while I'm in the middle of something.

                                                                                                                                                                                                                                                                                                                                                                Rant:

I _hate_ video tutorials. With a passion. If you can't be bothered to show pictures of how to use your product with a labeled diagram/drawing/photo of the buttons or connections, then I either won't buy it or I'll return it. I hate video reviews. I hate video repair instructions. I hate spending 15 minutes jumping back and forth between two segments of a YouTube video, trying to find the exact correct frame each time so I can see which button the person is touching while listening to their blather so I don't miss the keyword I heard last time, just to compare two sections that could have been two pictures on screen at the same time (if I was on desktop, this would be a trivial fix, but not so much on mobile). I hate having VPNs and other products advertised at me in ways that actively disrupt my train of thought (vs static ads that I can ignore/scroll past). I hate not being able to just copy and paste a few simple instructions and an image for procedures that I'll have to repeat weekly. It would have taken you less effort to create, and I'd be more likely to pay you for your time.

                                                                                                                                                                                                                                                                                                                                                                YouTube videos are like flash-based banner ads, but worse. Avoid them like the plague.

                                                                                                                                                                                                                                                                                                                                                                End rant.

                                                                                                                                                                                                                                                                                                                                                            • stripe_away 2 days ago

                                                                                                                                                                                                                                                                                                                                                              and to be blunt, I learned similar things building analog synths, before the dawn of LLMs.

                                                                                                                                                                                                                                                                                                                                                              Like you, I don't like watching videos. However, the web also has text, the same text used to train the LLMs that you used.

                                                                                                                                                                                                                                                                                                                                                              > When something doesn't work like I thought it would, AI helps me understand where I may have went wrong, I ask it a ton of questions, and I try again until I understand how it works and how to prove it.

                                                                                                                                                                                                                                                                                                                                                              Likewise, but I would have to ask either the real world or written docs.

                                                                                                                                                                                                                                                                                                                                                              I'm glad you've found a way to learn with LLMs. Just remember that people have been learning without LLMs for a long time, and it is not at all clear that LLMs are a better way to learn than other methods.

                                                                                                                                                                                                                                                                                                                                                              • sudosteph 2 days ago

                                                                                                                                                                                                                                                                                                                                                                The asking people part was the hard thing for me, always has been. That honestly was the missing piece for me. I absolutely agree that written docs and online content are sufficient for some people, that's how I learned Linux and sysadmin stuff, but I tried on and off to get into electronics for years that way and never got anywhere.

I think the problem was that all of the getting-started guides didn't really solve problems I cared about; they're just like "see, a light! isn't that neat?" and then I get bored and impatient and don't internalize anything. The textbooks had theory, but I would forget most of it before I could use it and actually learn. Then when I tried to build something actually interesting to me, I didn't understand the fundamentals, it always failed, Google didn't help me find out why because it could be a million things, and no human in my life understands this stuff either, so I would just go back to software.

                                                                                                                                                                                                                                                                                                                                                                It could be LLMs are at least possibly better for certain people to learn certain things in certain situations.

                                                                                                                                                                                                                                                                                                                                                                • chaps 2 days ago

                                                                                                                                                                                                                                                                                                                                                                    > However, the web also has text, the same text used to train the LLMs that you used.
                                                                                                                                                                                                                                                                                                                                                                  
The person you're responding to isn't denying that other people learn from those. But they're explicit that having the text alone wasn't helpful for them either:

                                                                                                                                                                                                                                                                                                                                                                    > I have a massive textbook about electronics, but it doesn't help me break down different paths to what I actually want to do.
                                                                                                                                                                                                                                                                                                                                                                • dns_snek a day ago

Your use of LLMs is distinctly different from the use being described here (in a good way).

                                                                                                                                                                                                                                                                                                                                                                  You might ask "What do I need to pay attention to when designing this type of electronic circuit", the people at risk of cognitive decline instead ask "design this electronic circuit for me".

I firmly believe that the latter group will suffer observable cognitive decline over the span of a few years unless they continue to exercise their brain in the same ways they used to, and I think the majority won't bother to do that - why spend much effort when little effort do trick?

                                                                                                                                                                                                                                                                                                                                                                  • defgeneric 2 days ago

                                                                                                                                                                                                                                                                                                                                                                    The physicality of having to actually do things in the real world slows things down to the rate at which our brains actually learn. The "vibe coding" loop is too fast to learn anything, and ends up teaching your brain to avoid the friction of learning.

                                                                                                                                                                                                                                                                                                                                                                    • aprilthird2021 2 days ago

Cool, but most people will get brain rotted by this. It's the same way we constantly talk about how social media is probably bad for people, and then some commenter comes along and says he's not addicted, and there's no other way he could communicate with his high school friends who live overseas and know about their lives. Not everyone will get only the positives out of any technology.

                                                                                                                                                                                                                                                                                                                                                                      • amelius 2 days ago

                                                                                                                                                                                                                                                                                                                                                                        Yeah, if you're using LLMs like an apprentice who asks their master, then there's nothing wrong with that, imho.

                                                                                                                                                                                                                                                                                                                                                                        • fxwin 2 days ago

Same here. I've been working through some textbooks that don't include solutions to their exercises, and ChatGPT has been invaluable for getting feedback on my solutions and for hints when I'm stuck.

                                                                                                                                                                                                                                                                                                                                                                          • kapone 2 days ago

                                                                                                                                                                                                                                                                                                                                                                            > - Learning how to solder - Learning how to use a multimeter - Learning to build basic circuits on breadboxes - learning about solar panels, mppt, battery management system, and different variations of li-on batteries - learning about LoRa band / meshtastic / how to build my own antenna

                                                                                                                                                                                                                                                                                                                                                                            And yet...somehow...humans have been able to learn and do these things (and do them well) for ages, with no LLMs around (or the stupid amount of capital being burned at the LLM stake).

And I want to hit the next person who says LLMs = AI with a broom or something, likely over and over again.

                                                                                                                                                                                                                                                                                                                                                                            /facepalm.

                                                                                                                                                                                                                                                                                                                                                                          • Jimmc414 2 days ago

A considerable number of methodology issues here for a study with this much traction. Only 54 participants, split three ways into groups of 18, with just 9 people per condition in the crossover. Far too small for claims about "brain reprogramming."

The study shows different brain patterns during AI-assisted writing, not permanent damage. Lower EEG activity when using a tool is expected, just as someone using a calculator shows less mental-math activity.

The coverage translates temporary, task-specific neural patterns into "cognitive decline" and "severe cognitive harm." The actual study measured brain activity during essay writing, not lasting changes.

                                                                                                                                                                                                                                                                                                                                                                            Plus, surface electrical measurements can't diagnose "cognitive debt" or deep brain changes. The authors even acknowledge this. Also, "83.3% couldn't quote their essay" equates to 15 out of 18 people?
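For what it's worth, the head-count arithmetic in that comment checks out; a quick sanity check (the numbers come from the comment above, not from reading the paper itself):

```python
# Participant arithmetic as quoted in the comment above.
total_participants = 54
num_groups = 3

per_group = total_participants // num_groups   # 54 split three ways -> 18 per group
per_condition = per_group // 2                 # crossover halves a group -> 9 per condition

# "83.3% of LLM users couldn't quote a sentence" out of a group of 18:
could_not_quote = 15
pct = round(100 * could_not_quote / per_group, 1)

print(per_group)      # 18
print(per_condition)  # 9
print(pct)            # 83.3
```

So the headline "83.3%" figure really does rest on 15 of 18 people, which is the commenter's point about sample size.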

                                                                                                                                                                                                                                                                                                                                                                            • tim333 a day ago

                                                                                                                                                                                                                                                                                                                                                                              Thank you for summarizing that. I guessed there must be some issues but didn't want to read the thing.

                                                                                                                                                                                                                                                                                                                                                                            • planetmcd 2 days ago

This article was probably written by AI, because no one with half a brain could read the study and come to the same conclusions.

                                                                                                                                                                                                                                                                                                                                                                              Basically, participants spent less than half an hour, 4 times, over 4 months, writing some bullcrap SAT type essay. Some participants used AI.

                                                                                                                                                                                                                                                                                                                                                                              So to accept the premise of the article, using an AI tool once a month for 20 minutes caused noticeable brain rot. It is silly on its face.

What the study actually showed is that people don't have an investment in, or strong memory of, output they didn't produce. Again, this is a BS essay written (mostly by undergrads) in 20 minutes, so not likely to be deep in any capacity. So to extrapolate: if you have a task that requires you to understand the output, you are less likely to have a grasp of it if you didn't help produce it. This would also be true of work some other person did.

                                                                                                                                                                                                                                                                                                                                                                              • marcofloriano 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                > What the study actually showed, people don't have an investment or strong memory to output they didn't produce.

Problem with LLMs is, when you spend hours feeding prompts to solve a problem, you actually did help (a lot!) to produce the output.

                                                                                                                                                                                                                                                                                                                                                                                • planetmcd 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                  I agree, the study didn't do that or have any thoughts on that.

                                                                                                                                                                                                                                                                                                                                                                              • epolanski 2 days ago

I can't help but think this has to be tied to _how_ AI is used.

                                                                                                                                                                                                                                                                                                                                                                                I actively use AI to research, question and argue a lot, this pushes me to reason a lot more than I normally would.

Today's example: recognize that docs are missing for a feature; have AI explore the code to figure out what's happening; then back and forth for hours trying to find how to document, rename, refactor, improve, write mermaid charts, and stress over naming to be as simple as possible.

The only step I'm doing less of is the exploration/search one, because an LLM can process a lot more text than I can at the same time. But for every other step I am pushing myself to think more, and more profoundly, than I would without an LLM, because gathering the same amount of information by hand would've been too exhausting to proceed with this.

Sure, it may have spared me from digging into mermaid too, for what it's worth.

So yes, you lose some, you win some, though in reality no work would've been done at all without the LLM enabling it. I would've moved on to another mundane task such as "update i18n date formatting for Swiss German customers".

                                                                                                                                                                                                                                                                                                                                                                                • eviks 2 days ago

No, vibe science is not powerful enough to determine "long-term cognitive harm", especially when such "technical wonders" as "measurable through EEG brain scans" are the evidence.

                                                                                                                                                                                                                                                                                                                                                                                  > 83.3% of LLM users were unable to quote even one sentence from the essay they had just written

Not sure why you need to wire up an EEG; it's pretty obvious that they simply did _not_ write the essay, the LLM did it for them, and they likely didn't even read it. So there's no surprise that they don't remember what never properly passed through their own thinking apparatus.

                                                                                                                                                                                                                                                                                                                                                                                  • matwood 2 days ago

I write all the time and couldn't quote anything offhand. What I can talk about are the ideas in the writing. I find LLMs useful as an editor: here's what I want to say, is it clear, are there better words, and so on. I never take the output blindly, and depending on how important the writing is, I may go back and forth line by line.

                                                                                                                                                                                                                                                                                                                                                                                    The idea that I would say 'write an essay on X' and then never look at the output is kind of wild. I guess that's vibe writing instead of vibe coding.

                                                                                                                                                                                                                                                                                                                                                                                  • puilp0502 2 days ago
                                                                                                                                                                                                                                                                                                                                                                                    • chychiu 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                      Was going to comment the same but you beat me to it!

                                                                                                                                                                                                                                                                                                                                                                                      On that note, reading the ChatGPT-esque summary in the linked article gave me more brain damage than any AI I've used so far

                                                                                                                                                                                                                                                                                                                                                                                      • causal 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                        The irony. It isn't even a new study. Way too much has been written about this flawed study when we should just be doing more studies.

                                                                                                                                                                                                                                                                                                                                                                                      • jennyholzer 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                        There are dozens of duplicates for pro-AI dreck, so this post should stand.

                                                                                                                                                                                                                                                                                                                                                                                        • ayhanfuat 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                          We can at least change the link to the actual paper instead of a vaccine denier's AI generated summary.

                                                                                                                                                                                                                                                                                                                                                                                          • causal 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                            Instead of trying to balance dreck can we just... not upvote any dreck

                                                                                                                                                                                                                                                                                                                                                                                            • fortyseven 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                              Being anti-AI drivel is completely fine though.

                                                                                                                                                                                                                                                                                                                                                                                          • gandalfgeek 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                            The coverage of this has been so bad that the authors have had to put up an FAQ[1] on their website, where the first question is the following:

                                                                                                                                                                                                                                                                                                                                                                                            Is it safe to say that LLMs are, in essence, making us "dumber"? No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "brain damage", "passivity", "trimming" , "collapse" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it.

                                                                                                                                                                                                                                                                                                                                                                                            [1]: https://www.media.mit.edu/projects/your-brain-on-chatgpt/ove...

                                                                                                                                                                                                                                                                                                                                                                                            • marcofloriano 2 days ago

It's hardly safe to say that such a small study can clearly establish the fact. But of course, as it's a very sensitive topic, 'the language' and 'the narrative' must be carefully chosen, or you can be 'banned'. Of course we won't see new studies like that anytime soon.

                                                                                                                                                                                                                                                                                                                                                                                            • misswaterfairy 2 days ago

I can't say I'm surprised by this. The brain is, figuratively speaking, a muscle. Learning through successes and (especially) failures is hard work, though not without benefit: the trials and exercises your brain works through strengthen that 'muscle'.

Using LLMs to replace the effort we would've otherwise expended to complete a task short-circuits that exercise, and I would suggest it's potentially addictive because it offers a near-instant reward for little work.

It would be interesting to see a longitudinal study on the effect of LLMs on collective attention spans and academic scores where testing is conducted with pen and paper.

                                                                                                                                                                                                                                                                                                                                                                                              • onlyrealcuzzo 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                Sounds bullish for AI.

                                                                                                                                                                                                                                                                                                                                                                                                It's like a drug. You start using it, and think you have super powers, and then you've forgotten how to think, and you need AI just to maybe be as smart as you were before.

                                                                                                                                                                                                                                                                                                                                                                                                Every company will need enterprise AI solutions just to maybe get the same amount of productivity as they got before without it.

                                                                                                                                                                                                                                                                                                                                                                                                • jugg1es 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                  This is sad but true.

                                                                                                                                                                                                                                                                                                                                                                                                  • kjkjadksj 2 days ago

And the pipeline is cooked now, with some universities allowing AI use. It's like what CliffsNotes did to reading comprehension, but across all aspects of life and all domains. What a coming tsunami.

                                                                                                                                                                                                                                                                                                                                                                                                • infecto 2 days ago

Everyone is different. I don't have a good grasp on the distribution of HN readers these days, but I know that for myself, as a heavy user of LLMs, I'm not sold on this. I am asking more questions than ever. I use LLMs for proofreading and editing. But I can see the risk as a software engineer. I really appreciate tools like Cursor: I give it bite-size chunks and review. With tools like Claude Code, though, it becomes a black box and I no longer feel at the helm of the ship. I could see that if you outsourced all thinking to an LLM there could be consequences. That said, I am not sold on the paper and suspect it's mostly hyperbole.

                                                                                                                                                                                                                                                                                                                                                                                                  • Taek 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                    Cognitive decline is a broad term, and a research paper could claim "decline" if even a single cognitive metric loses strength.

                                                                                                                                                                                                                                                                                                                                                                                                    When writing was invented, societies started depending on long form memorization less, which is a cognitive "decline". When calculators were invented, societies started depending on mental math less, which is a cognitive "decline".

                                                                                                                                                                                                                                                                                                                                                                                                    I'm sure LLMs are doing the same thing. People aren't getting dumber, they are just outsourcing tasks more, so that their brains spend more time on the tasks that can't be outsourced.

                                                                                                                                                                                                                                                                                                                                                                                                    • yuehhangalt 2 days ago

My concern is more about the tasks that can't or won't be outsourced.

People who maintain a high level of curiosity or have a drive to create things will most assuredly benefit from using AI to outsource work that doesn't support those drives. It has the potential to free up more time for creative endeavors or those that require deeper thinking. Few would argue the benefit there.

                                                                                                                                                                                                                                                                                                                                                                                                      Unfortunately, anti-intellectualism is rampant, media literacy is in decline, and a lot of people are content to consume content and not think unless they absolutely have to. Dopamine is a helluva drug.

                                                                                                                                                                                                                                                                                                                                                                                                      If LLMs reduce the cognitive effort at work, and the people go home to doom scroll on social media or veg out in front of their streaming media of choice, it seems that we're heading down the path of creating a society of mindless automatons. Idiocracy is cited so often today that I hate to do so myself, but it seems increasingly prescient.

Edit: I also don't think that AI will enable greater work-life harmony. The pandemic showed that a large number of jobs could effectively be done remotely. Yet afterward there was a significant "Return to Office" movement that almost seemed like retribution for believing we could achieve a better balance. Corporations won't pass the time savings on to their employees and enable things like 4-day work weeks. They'll simply expect more productivity from the employees they have.

                                                                                                                                                                                                                                                                                                                                                                                                      • IAmBroom 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                        Absolutely true.

                                                                                                                                                                                                                                                                                                                                                                                                        Also, domesticated dogs show indications of lower intelligence and memory than wolves. They don't have to plan complex strategies to find and kill food, anymore.

                                                                                                                                                                                                                                                                                                                                                                                                        • Taek 2 days ago

The difference between us and dogs is that we DO still need to earn a salary. Dogs live in the lap of luxury where their needs are guaranteed to be handled.

                                                                                                                                                                                                                                                                                                                                                                                                          But humans need jobs, and jobs need to capture value from society. So we do actually still have to stay sharp, whatever form "sharp" takes.

                                                                                                                                                                                                                                                                                                                                                                                                          • pessimizer 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                            You and dogs have the same job, which is to please the boss. The boss then takes care of you like a child, either with a paycheck (with which you can pay servants to supply your earthly needs), or directly if you're a dog and lack both thumbs and pockets to hold a wallet or a phone. A domestic dog would die left alone in a forest, about two or three weeks after you would.

                                                                                                                                                                                                                                                                                                                                                                                                            If you're an entrepreneur, your job is to please the customer and to squeeze your vendors and employees. You still take little to no part in directly taking care of yourself, except as a hobby. Unless you want to be congratulated for wiping your own ass or lifting a fork to your mouth.

                                                                                                                                                                                                                                                                                                                                                                                                        • infecto 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                          This is super interesting and I had not thought about it like that!

                                                                                                                                                                                                                                                                                                                                                                                                        • ceejayoz 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                          > I am asking more questions than ever.

                                                                                                                                                                                                                                                                                                                                                                                                          Wouldn't that be the expected result here? Less knowledge, more questions?

                                                                                                                                                                                                                                                                                                                                                                                                          • infecto 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                            That’s one interpretation, but I think there’s a distinction between “asking more questions because I’ve forgotten things” and “asking more questions because I’m exploring further.”

                                                                                                                                                                                                                                                                                                                                                                                                            When I use LLMs, it’s less about patching holes in my memory and more about taking an idea a few steps further than I otherwise might. For me it’s expanding the surface area of inquiry, not shrinking it. If the study’s thesis were true in my case, I’d expect to be less curious, not more.

                                                                                                                                                                                                                                                                                                                                                                                                             Now, that said, I also have a healthy dose of skepticism about all the output, but I find that in the general case I can at least explore my thoughts further than I might have in the past.

                                                                                                                                                                                                                                                                                                                                                                                                            • rwnspace 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                              In my personal experience new knowledge tends to beget questions.

                                                                                                                                                                                                                                                                                                                                                                                                            • xnorswap 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                              > I am asking more questions than ever.

                                                                                                                                                                                                                                                                                                                                                                                                              I don't have a dog in this fight, but "asking more questions" could be evidence of cognitive decline if you're having to ask more questions than ever!

                                                                                                                                                                                                                                                                                                                                                                                                               It's easy to twist evidence to fit biases, which is why I'd hold judgement until better evidence comes through.

                                                                                                                                                                                                                                                                                                                                                                                                              • IAmBroom 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                Well, that's certainly a take.

                                                                                                                                                                                                                                                                                                                                                                                                                But if I'm teaching a class, and one student keeps asking questions that they feel the material raised, I don't tend to think "brain damage". I think "engaged and interested student".

                                                                                                                                                                                                                                                                                                                                                                                                                • charlie-83 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                   Not OP, but there's a difference between needing to ask more questions and asking more questions because it's easier now.

                                                                                                                                                                                                                                                                                                                                                                                                                  Personally, I find myself often asking AI about things I wouldn't have been bothered to find out about before.

                                                                                                                                                                                                                                                                                                                                                                                                                   For example, I've always seen these funny little grates on the outside of houses near me and wondered what they are. Googling "little grates outside houses" doesn't help at all. Give AI a vague-ish description and it instantly tells you they are old boot scrapers.

                                                                                                                                                                                                                                                                                                                                                                                                                  • infecto 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                    Haha you nailed it. Walking around and experiencing the world I can now ask a vague question and usually find an answer.

                                                                                                                                                                                                                                                                                                                                                                                                                    Maybe there is a movie in the back of my head or a song. Typical search engine queries would never find it. I can give super vague references to a LLM and with search enabled get an answer that’s correct often enough.

                                                                                                                                                                                                                                                                                                                                                                                                                    • danenania 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                      The ability to keep following the thread and interrogating the answers is also very valuable. You never have to accept an answer you only half understand.

                                                                                                                                                                                                                                                                                                                                                                                                                  • infecto 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                    Fair point, though I think there’s a difference between “questions out of confusion” and “questions out of curiosity.”

                                                                                                                                                                                                                                                                                                                                                                                                                    If I’m constantly asking “what does this mean again?” that would signal decline. But if I’m asking “what if I combine this with X?” or “what are the tradeoffs of Y?” that feels like the opposite: more engagement, not less.

                                                                                                                                                                                                                                                                                                                                                                                                                     That’s why I’m skeptical of blanket claims from one study; the lived experience doesn’t map so cleanly.

                                                                                                                                                                                                                                                                                                                                                                                                                • jennyholzer 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                  > In post-task interviews:

                                                                                                                                                                                                                                                                                                                                                                                                                  > 83.3% of LLM users were unable to quote even one sentence from the essay they had just written.

                                                                                                                                                                                                                                                                                                                                                                                                                  > In contrast, 88.9% of Search and Brain-only users could quote accurately.

                                                                                                                                                                                                                                                                                                                                                                                                                  > 0% of LLM users could produce a correct quote, while most Brain-only and Search users could.

                                                                                                                                                                                                                                                                                                                                                                                                                   Reminds me of my coworkers who have literally no idea what ChatGPT put into their PR from last week.

                                                                                                                                                                                                                                                                                                                                                                                                                  • aurareturn 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                    Maybe we should question the value of essays in the ChatGPT world?

                                                                                                                                                                                                                                                                                                                                                                                                                     Could a person, armed with ChatGPT, come up with a better solution to a real-world problem than without it? Maybe that's what actually matters.

                                                                                                                                                                                                                                                                                                                                                                                                                    • Ekaros 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                       Can they evaluate whether the idea they came up with is better if they do not remember how it was stated? Isn't the point of writing actually to set down one's thoughts in a communicable manner, and then possibly have them verified by others?

                                                                                                                                                                                                                                                                                                                                                                                                                       But how can they discuss any content if even the "writer" does not remember what they wrote?

                                                                                                                                                                                                                                                                                                                                                                                                                      • kibwen 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                        The point of writing essays is not to produce an essay, it's to demonstrate that you understand something well enough to engage with it critically, in addition to being an exercise for critical thinking itself.

                                                                                                                                                                                                                                                                                                                                                                                                                        • abirch 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                           College was transformed from the apprentice-style institution of the 1500s to the mass-produced thing of the early 2000s (where a professor can "teach" 500 students in a class).

                                                                                                                                                                                                                                                                                                                                                                                                                           I think we're due for a return to the apprentice style of institution, where people try to create the best real-world solution possible with LLMs, 3D printers, etc., and then use recorded college courses like our grandparents used books.

                                                                                                                                                                                                                                                                                                                                                                                                                      • colincooke 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                        It is worth noting that this study was tbh pretty poorly performed from a psychology/neuroscience perspective and the neuro community was kind of roasting their results as uninterpretable.

                                                                                                                                                                                                                                                                                                                                                                                                                         Their trial design and interpretation of results are not properly done (i.e. they are making unfair comparisons of LLM users to non-LLM users), so they can't really make the kind of claims they are making.

                                                                                                                                                                                                                                                                                                                                                                                                                         This would not stand up to peer review in its current form.

                                                                                                                                                                                                                                                                                                                                                                                                                        I'm also saying this as someone who generally does believe these declines exist, but this is not the evidence it claims to be.

                                                                                                                                                                                                                                                                                                                                                                                                                        • Shank 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                          > It is worth noting that this study was tbh pretty poorly performed from a psychology/neuroscience perspective and the neuro community was kind of roasting their results as uninterpretable.

                                                                                                                                                                                                                                                                                                                                                                                                                           Do you have links or citations for people making these claims?

                                                                                                                                                                                                                                                                                                                                                                                                                      • ticulatedspline 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                        Cognitive offload is nothing new, if you've been around for even a little while you've likely personally experienced it.

                                                                                                                                                                                                                                                                                                                                                                                                                         Just like a muscle will atrophy from disuse, skills and cognitive assets, once offloaded, will similarly atrophy. People don't memorize phone numbers, GPS gets you where you want to go, your IDE helps you along so seamlessly you could never code in a plain text editor, your TI-89 will do most of your math homework, and as a manager you direct people to do work and no longer do the work yourself.

                                                                                                                                                                                                                                                                                                                                                                                                                         We of course never really lower our absolute cognitive load by much, we just shift it. Each of those points has its own knowledge base that is needed to use it, but sometimes we lose general skills in favor of esoteric ones.

                                                                                                                                                                                                                                                                                                                                                                                                                         While I may now possess esoteric skills in operating my GPS (setting waypoints, saving locations, entering coordinates), if I use it a lot I find I need it to get back to the hotel from just a few miles away, even if I've driven the route multiple times. I'm offloading learning the route to the GPS. My father, on the other hand, struggles to use one, and when he's away he pays a lot of attention to where he's going and remembers routes better.

                                                                                                                                                                                                                                                                                                                                                                                                                         Am I dumber than him? With respect to operating the device, certainly not. But if we both drove separately to a new location and you took the GPS from me once I got there, I'd certainly look a lot dumber getting lost trying to get back without my mental crutch. I didn't have to remember the route, so I didn't; I offloaded that to the machine. And some people offload a LOT: pretty sure nobody ever drove into a lake because a paper map told them to.

                                                                                                                                                                                                                                                                                                                                                                                                                        Modern AI is only interesting insofar as it subsumes tasks that until now we would consider fundamental. Reading, writing, basic comprehension. If you let it, AI will take over these things and spoon feed you all you want. Your cognitive abilities in those areas will atrophy and you will be less cognizant of task elements where you've offloaded mental workload to the AI.

And we'll absolutely see more of this: people who are a whiz at using AI, know every app, get things done by reflex, but never learned or completely forgot how to do basic shit, like read a paper, order a salad off a menu in person, or book a flight. It'll be both funny and sad when it happens.

                                                                                                                                                                                                                                                                                                                                                                                                                        • kawfey 2 days ago

The "your brain on ChatGPT" framing gives the same feel as DARE's "your brain on drugs" campaign, and we now see how that went. It immediately loses any credibility for me.

It wasn't immediately clear what they actually had the subjects do. It seems like they wrote an essay, which... duh? I would bet brain activity would be similar -- if not identical -- to an LLM user's if the subjects were asked to have the other cohorts write their essay.

                                                                                                                                                                                                                                                                                                                                                                                                                          • causal 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                            Just look at this comment section - one flawed headline is all it takes to get hundreds of people writing essays about how they totally understand how the brain works and knew it all along.

                                                                                                                                                                                                                                                                                                                                                                                                                          • owisd 2 days ago

I was reading The Shallows recently, about how the Internet affects your brain. It's from 2009, so a bit out of date re: smartphones and a lot re: LLMs, but it makes the case that the Internet, and hypertext generally, is 'bad' for you cognitively because it puts additional load on your working memory while offloading tasks from the parts of your brain that are useful for higher-level tasks and abstract thinking, so those more valuable skills atrophy. It contrasts this with the calculator, which makes you "smarter" because it does the opposite: frees up your working memory so you have more time to focus on high-level thought. Found it quite striking, because LLMs and smartphones seem most likely to fit in the hypertext category and not the calculator category, yet the calculator is exactly what Sam Altman likes to use as an analogy for LLMs.

                                                                                                                                                                                                                                                                                                                                                                                                                            • variadix 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                              Seems obvious. If you don’t use it you lose it. Same thing happened with mental arithmetic, remembering phone numbers, etc. Letting an LLM do your thinking will make you worse at thinking.

                                                                                                                                                                                                                                                                                                                                                                                                                              • DrNosferatu 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                If you blindly trust it instead of using it as an iterative tool, I guess…

                                                                                                                                                                                                                                                                                                                                                                                                                                But didn’t pocket calculators present the same risk / panic?

                                                                                                                                                                                                                                                                                                                                                                                                                                • diddid 2 days ago

Graphing calculators did, which is why they got banned in a lot of math classes. If your calculator can solve for x, you won't spend time learning how to. The best math classes usually do without calculators, focusing on concepts and skipping numbers you'd need a calculator for.

                                                                                                                                                                                                                                                                                                                                                                                                                                  • boesboes 2 days ago

This. I was allowed to use the graphing mode to do integrals and derivatives. It made high school easy, but in uni it turned out I had zero math skills. Had to switch studies.

                                                                                                                                                                                                                                                                                                                                                                                                                                  • wiredfool 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                    There’s a narrow band of math that’s amenable to pocket calculators. When used in that band, they can repeatably return the correct answer.

                                                                                                                                                                                                                                                                                                                                                                                                                                    • bell-cot 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                      The cognitive decline described here sounds far broader than just getting rusty at arithmetic.

                                                                                                                                                                                                                                                                                                                                                                                                                                      • jennyholzer 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                        When I enter 5 x 5 on a pocket calculator, I always get 25

                                                                                                                                                                                                                                                                                                                                                                                                                                      • Brian_K_White 2 days ago

It's probably an effect of the transition period, where today people are using AIs to meet the work expectations and metrics of yesterday.

                                                                                                                                                                                                                                                                                                                                                                                                                                        At some point ai will probably be like calculators where once everyone is using them for everything, that will be a new and different normal from today, and the expectations and the way of judging quality etc will be different than today.

                                                                                                                                                                                                                                                                                                                                                                                                                                        Once everyone is doing the same one weird trick as you, it's no longer useful. You can no longer pretend to be a developer or an artist etc.

There will still be a sea of bottom-feeders doing the same thing, but they will just be universally recognized as cheap junk. And that's actually fine, kinda. There is a place and a use for cheap junk that just barely does something, the same as a cheap junky screwdriver or whatever.

                                                                                                                                                                                                                                                                                                                                                                                                                                        • ergonaught 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                          No idea whether this holds up, but the human body is all about conditioning and maximizing energy efficiency, so it should at least be unsurprising if true.

                                                                                                                                                                                                                                                                                                                                                                                                                                          My vehicle has a number of self-driving capabilities. When I used them, my brain rapidly stopped attending to the functions I'd given over, to the extent that there was a "gap" before I noticed it was about to do the wrong thing. On resumption of performing that work myself, it was almost as if I had forgotten some elements of it for a moment while my brain sorted it out.

                                                                                                                                                                                                                                                                                                                                                                                                                                          No real reason to think that outsourcing our thinking/writing/etc will cause our brains to respond any differently. Most of the "reasoned" arguments I see against that idea seem based on false equivalences.

                                                                                                                                                                                                                                                                                                                                                                                                                                          • Gareth321 2 days ago

This is why I am not so concerned. I am old enough to remember when teachers thought that outsourcing calculations to calculators would atrophy my brain. They said the same about computers. Then the internet and Wikipedia. On one hand, yes, I am slower at calculating things by hand. On the other, it doesn't matter anymore. I am much faster at getting things accomplished. AI might just be the latest way in which humans are exploring transhumanism. Perhaps we are irreversibly altering our brains. I'm just not convinced that's a terrible thing.

                                                                                                                                                                                                                                                                                                                                                                                                                                          • NiloCK 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                            Every augmentation is also an amputation.

                                                                                                                                                                                                                                                                                                                                                                                                                                            Calculators reduced our capabilities in mental and pencil-paper arithmetic. Graphing calculators later reduced our capacity to sketch curves, and in turn, our intuition in working directly with equations themselves. Power tools and electric mixers reduced our grip strength. Cheap long distance plans and electronic messaging reduced our collective abilities in long-form letter writing. The written word decimated the population of bards who could recite Homer from memory.

                                                                                                                                                                                                                                                                                                                                                                                                                                            It's not that there aren't pitfalls and failure modes to watch out for, but the framing as a "general decline" is tired, moralizing, motivated, clickbait.

                                                                                                                                                                                                                                                                                                                                                                                                                                            • add-sub-mul-div 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                              > Calculators reduced our capabilities in mental and pencil-paper arithmetic.

                                                                                                                                                                                                                                                                                                                                                                                                                                              And now people make bad decisions in their daily life about money etc. Most people can't do the math in their head but they also aren't using their calculator at the grocery store to avoid being taken advantage of. The math doesn't get done.

                                                                                                                                                                                                                                                                                                                                                                                                                                              The lesson isn't that we survived calculators, it's that they did dull us, and our general thinking and creativity are about to get likewise dulled.

                                                                                                                                                                                                                                                                                                                                                                                                                                            • abirch 2 days ago
                                                                                                                                                                                                                                                                                                                                                                                                                                              • bgwalter 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                I tried to see what the hype is about and translated one build system to another using "AI". The result was wrong, bloated and did not work. I then used smaller steps like the prompt geniuses recommend. It was exhausting, still riddled with errors, like a poor version of copy & paste.

                                                                                                                                                                                                                                                                                                                                                                                                                                                Most importantly, I did not remember anything (which is a good thing because half of the output is wrong). I then switched to Stackoverflow etc. instead of the "AI". Suddenly my mental maps worked again, I recalled what I read, programming was fun again, the results were correct and the process much faster.

                                                                                                                                                                                                                                                                                                                                                                                                                                                • badbart14 2 days ago

I remember this paper when it came out a couple months ago. Makes a lot of sense: tools like ChatGPT essentially offload the thinking processes in your brain. I really like the analogy to time under tension they talk about in https://www.theringer.com/podcasts/plain-english-with-derek-... (they also discuss this study and some of its flaws/results).

                                                                                                                                                                                                                                                                                                                                                                                                                                                  • rusbus 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                    Does anyone else find it incredibly ironic that this article summarizing the paper was obviously written with AI?

                                                                                                                                                                                                                                                                                                                                                                                                                                                    All the headings and bullets and phrases like "The findings are clear:" stick out like a sore thumb.

                                                                                                                                                                                                                                                                                                                                                                                                                                                    • pjio 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                      First step out of this mess: Use AI only to proof read or get a second opinion, but not to write the whole thing.

                                                                                                                                                                                                                                                                                                                                                                                                                                                      • bookofjoe 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                        That ship has sailed.

                                                                                                                                                                                                                                                                                                                                                                                                                                                        >Everyone Is Cheating Their Way Through College. ChatGPT has unraveled the entire academic project.

                                                                                                                                                                                                                                                                                                                                                                                                                                                        https://archive.ph/ZKZiY

                                                                                                                                                                                                                                                                                                                                                                                                                                                        • jajko 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                           It's as if somebody finds it shocking that people are generally lazy. Then you have the other extreme group, the deniers: "I work more than ever!", "I ask even more questions!" and so on, here and elsewhere.

                                                                                                                                                                                                                                                                                                                                                                                                                                                           Sure you do, and maybe it really is a benefit for you. Not for most, though. For young folks still going through education, this is devastating. If I didn't have kids I wouldn't care (less quality competition at work), but I do (they're too young to be affected now, and by the time they're allowed to use these tools, frameworks and restrictions will already be in place).

                                                                                                                                                                                                                                                                                                                                                                                                                                                           But since maybe 30% of folks here depend, directly or indirectly, on LLMs being pushed down every possible throat and then some, I expect much more denial and resistance to critique of their little pets or investments.

                                                                                                                                                                                                                                                                                                                                                                                                                                                          • charlie-83 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                             It feels like all this is because the point of school/college/university is just to get a piece of paper rather than to gain skills. Why wouldn't you get ChatGPT to write your essay when your only goal is a passing grade?

                                                                                                                                                                                                                                                                                                                                                                                                                                                             My optimistic take is that the rise of AI in education could push more workplaces to move away from "must have xyz degree" requirements and actually determine whether a candidate has the skills needed.

                                                                                                                                                                                                                                                                                                                                                                                                                                                            • jbstack 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                               I agree with this in principle, but the problem is what happens to the in-between generation that cheats its way to the piece of paper before the world moves on to a better way? At least previous generations got the piece of paper and acquired some skills/knowledge along the way.

                                                                                                                                                                                                                                                                                                                                                                                                                                                              For this reason, I don't feel as optimistic as you do. I worry instead that equality gaps will widen significantly: there will be the majority which abuses AI and graduates with empty brains, and there will be the minority who somehow manage to avoid doing that (e.g. lucky enough to have parents with sufficient foresight to take preventative measures with their children).

                                                                                                                                                                                                                                                                                                                                                                                                                                                            • sudosteph 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                               I'm one of the people who find LLMs extremely helpful from a learning perspective, but to be perfectly honest, I've met the children of complete "luddites" (no tablets, internet at home on a timer for school work, no phones until 16, home schooled, house filled with a million books) and they honestly were some of the most intelligent, well-read, and thoughtful young people I've met.

                                                                                                                                                                                                                                                                                                                                                                                                                                                              LLMs may end up being both educationally valuable in certain contexts for certain users, and totally unsuitable for developing brains. I would err towards caution for young minds especially.

                                                                                                                                                                                                                                                                                                                                                                                                                                                            • bgwalter 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                              Not in China:

                                                                                                                                                                                                                                                                                                                                                                                                                                                              https://nypost.com/2025/08/19/world-news/china-restricts-ai-...

                                                                                                                                                                                                                                                                                                                                                                                                                                                              "That’s because the Chinese Communist Party knows their youth learn less when they use artificial intelligence. Surely, President Xi Jinping is reveling in this leg up over American students, who are using AI as a crutch and missing out on valuable learning experiences as a result.

                                                                                                                                                                                                                                                                                                                                                                                                                                                              It’s just one of the ways China protects their youth, while we feed ours into the jaws of Big Tech in the name of progress."

                                                                                                                                                                                                                                                                                                                                                                                                                                                          • AnimalMuppet 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                            Depends on who you are and what you want.

                                                                                                                                                                                                                                                                                                                                                                                                                                                            Let's say I'm a writer of no skill who still wants attention. I could spend years learning to write better, but I still might not get any attention.

                                                                                                                                                                                                                                                                                                                                                                                                                                                            Or I could use AI to write something today. It won't be all that interesting, because AI still can't write all that well, but it may be better than I can do on my own, and I can get attention today.

                                                                                                                                                                                                                                                                                                                                                                                                                                                            If you care about your own growth (or even not dwindling) as a human, that's a trap. But not everyone cares about that...

                                                                                                                                                                                                                                                                                                                                                                                                                                                            • Bluecobra 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                               This is exactly how I use AI at work: to quickly generate funny meme images and inside jokes for a quick chuckle. I'm no artist and probably will never be one. My digital art skills amount to drawing stick figures in MS Paint.

                                                                                                                                                                                                                                                                                                                                                                                                                                                          • Eawrig05 a day ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                             This study is so limited in scope that the title is really misleading: "AI Use Reprograms the Brain" is not a fair summary of it. The study focuses on one question: what is the effect of relying on an LLM to write your essay? The answer: it makes you forget how to write a good essay. I think it's obvious that if you rely on an LLM to write for you, you effectively lose the skill of writing. But what if you use an LLM to teach you a concept? Would that also lead to cognitive decline? I don't know the answer, but I think that's a question that ought to be explored.

                                                                                                                                                                                                                                                                                                                                                                                                                                                            • ramesh31 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                              I think like a lot of people here, my posture towards AI usage over the last 2 years has gone from:

                                                                                                                                                                                                                                                                                                                                                                                                                                                              "Won't touch it, I'd never infect my codebase with whatever garbage that thing could output" -> ChatGPT for a small function here or there -> Cursor/Copilot style autocomplete -> Claude Code fully automating 90% of my tasks.

                                                                                                                                                                                                                                                                                                                                                                                                                                                              It felt like magic at first once reaching that last (current) point. In a lot of ways for certain things it still is. But it's becoming clearer and clearer that this will never be a silver bullet, and I'm ready to evolve further to "It's another tool in the toolbox to be applied judiciously when and where it makes sense, which it usually does not.". I've also come to greatly distrust anything an LLM says that isn't verified by a domain expert.

                                                                                                                                                                                                                                                                                                                                                                                                                                                               I've also felt a great amount of joy drain from my work over this time, much like the artisans of old who were forced to sit back and supervise as the machines taking over their craft churned out crappier versions of their work, faster. There's more to this than just being an old fart who doesn't want to change. We all got into this field for a reason, and a huge part of that reason is that it brings us joy. Without that joy we are going to burn out quickly, and quality is going to nosedive.

                                                                                                                                                                                                                                                                                                                                                                                                                                                              • flanbiscuit 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                > and diminished sense of ownership over their own writing.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                 Anecdotally, this is how I felt when I tried out AI agents to help me write code (vibe coding). I always review the code and ask it to break things down into smaller steps, but because I didn't actually write and think through the code myself, I don't have it all in my brain. Sure, I can spend a lot of time really going through it and building my mental model, but it's not the same (for me).

                                                                                                                                                                                                                                                                                                                                                                                                                                                                But this is also how I felt when I managed a small team once. When you start to manage more and code less, you have to let go of the fact that you have more intimate knowledge of the codebase and place that trust in your team. But at least you have a team of humans.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                AI agentic coding is like shifting your job from developer to manager. Like the article that was posted yesterday said: 'treating AI like a "junior developer who doesn't learn"' [1,2].

                                                                                                                                                                                                                                                                                                                                                                                                                                                                One good thing I like about AI is that it's forcing people to write more documentation. No more complaining about that.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                1. https://www.sanity.io/blog/first-attempt-will-be-95-garbage

                                                                                                                                                                                                                                                                                                                                                                                                                                                                2. https://news.ycombinator.com/item?id=45107962

                                                                                                                                                                                                                                                                                                                                                                                                                                                                • globular-toast 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                   Yeah, same experience here too. I "vibe coded" a project, about 3k LOC including tests. But whenever I need to look at it for bugs, it just feels like I'm looking at someone else's code. I don't have that intuition of where things are, which bits are a bit fragile, which bits might be the likely cause of an issue, etc.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                   I mean, ultimately, I didn't write it myself. It's more of a "remix" of other people's code. Or it's like translating this comment into French: that wouldn't improve my French, so why would vibe coding be expected to improve one's programming ability?

                                                                                                                                                                                                                                                                                                                                                                                                                                                                • SkyBelow 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                   The main issue I see is that the methodology section of the paper limited the total time to 20 minutes. Is this a study of using LLMs to write an essay for you, or of using LLMs to help you write an essay? To be fair, LLMs can't be switched between the two modes, so the distinction comes down to how the user engages with them.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                   Thinking about it myself, and looking at the questions and time limits, I'm not sure how I would navigate that distinction given only 20 minutes. The way I would use an LLM to aid me in writing an essay on the topic wouldn't fit within the time limit, so even with an LLM available, I would likely stick to brain-only except in a few specific cases (forgetting how to spell a word, or forgetting the name of a concept).

                                                                                                                                                                                                                                                                                                                                                                                                                                                                   So this study is likely applicable to similar timed settings, like letting students use LLMs on a test, but that's something I would already have seen as extremely problematic for learning (granted, it's still worthwhile to find evidence backing even the 'obvious' conclusions).

                                                                                                                                                                                                                                                                                                                                                                                                                                                                  • TYPE_FASTER 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                    I used to know a bunch of phone numbers by heart. I haven't done that since I got a cellphone. Has that had an impact on my ability to memorize things? I have no idea.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                    I have recently been finding it noticeably more difficult to come up with the word I'm thinking of. Is this because I've been spending more time scrolling than reading? I have no idea.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • isodev 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                      An AI is telling me these could be symptoms of the onset of a degenerative neurological condition. Is it true? I have no idea.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • lo_zamoyski 2 days ago

Why is this surprising? "Use it or lose it" may be a cliché, but it's true; if you don't keep a faculty conditioned, it gets "rusty". That's the general principle, so it would be surprising if this were an exception.

The age of social media and constant distraction already atrophies the ability to maintain sustained focus. Who reads a book these days, never mind a thick book requiring struggle to master? That requires immersion, sustained engagement, persevering through discomfort, and denying yourself indulgence in all sorts of temptations and enticements to get a cheap fix. It requires postponed gratification, or a gratification that is more subtle, measured, and piecemeal rather than some sharp spike. We are conditioned in Pavlovian fashion: the more we engage in such behavior, the more habituated to it we become.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                      The reliance on AI for writing is partly rooted in the failure to recognize that writing is a form of engagement with the material. Clear writing is a way of developing knowledge and understanding. It helps uncover what you understand and what you don't. If you can't explain something, you don't know it well enough to have clear ideas about it. What good does an AI do you - you as a knowing subject - if it does the "writing" for you? You, personally, don't become wiser or better. You don't become fit by watching others exercise.

This isn't to say AI has no purpose, but our attitude toward technology is often irresponsible. We think that if we have the power to do something, we are missing out by not using it. This is boneheaded. The ultimate measure is whether the technology is good for you in some particular use case. Sometimes we make prudential allowances for practical reasons. There can be a place for AI to "write" for us, but there are plenty of cases where it is simply senseless to use it. You need to be prudent, or you end up abusing the technology.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • mansilladev 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                        “…our cognitive abilities and creative capacities appear poised to take a nosedive into oblivion.”

                                                                                                                                                                                                                                                                                                                                                                                                                                                                        Don’t sugarcoat it. Tell us how you really feel.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                        • jennyholzer 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                          I think developers who use "AI" coding assistants are putting their careers at risk.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                          • dguest 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                            And here I'm wondering if I'm putting my career at risk by not trying them out.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                            Probably both are true: you should try them out and then use them where they are useful, not for everything.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                            • Taek 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                              HN is full of people who say LLMs aren't good at coding and don't "really" produce productivity gains.

None of my professional life reflects that whatsoever. When used well, LLMs are exceptional at putting out large amounts of code of sufficient quality. My peers have switched entire engineering departments to LLM-first development and are reporting that the whole org is moving 2x as fast, even after they fired the 50% of devs who couldn't make the switch and didn't hire replacements.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                              If you think LLM coding is a fad, your head is in the sand.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                              • bgwalter 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                The instigators say they were correct and fired the political opponents. Unheard of!

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                I have no doubt that volumes of code are being generated and LGTM'd.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                • mooxie 2 days ago

Agreed. I work for a tiny startup where I wear multiple hats, and one of them is DevOps. I manage our cloud infra with Terraform, and anyone who's scaled cloud infrastructure from a <10 headcount company to a successful 500+ one knows how critical it is to get a handle on the infrastructure early. It's basically now or never.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  It used to take me days or even multiple sprints to complete large-scale infrastructure projects, largely because of having to repeatedly reference Terraform cloud provider docs for every step along the way.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  Now I use Claude Code daily. I use an .md to describe what I want in as much detail as possible and with whatever idiosyncrasies or caveats I know are important from a career of doing this stuff, and then I go make coffee and come back to 99% working code (sometimes there are syntax errors due to provider / API updates).

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  I love learning, and I love coding. But I am hired to get things done, and to succeed (both personally and in my role, which is directly tied to our organization's security, compliance, and scalability) I can't spend two weeks on my pet projects for self-edification. I also have to worry about the million things that Claude CAN'T do for me yet, so whatever it can take off of my plate is priceless.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  I say the same things to my non-tech friends: don't worry about it 'coming for your job' yet - just consider that your output and perceived worth as an employee could benefit greatly from it. If it comes down to two awesome people but one can produce even 2x the amount of work using AI, the choice is obvious.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  • 010101010101 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    Yesterday I used Warp’s LLM integrations to write two shell scripts that would have taken me longer to author myself than to do the task manually. Of the three options, this was the fastest by a wide margin.

For this kind of low-stakes, easily verifiable task, it's hard for me to argue against using LLMs.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • dguest 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      Right now I'm mostly an "admin" coder: I look at merge requests and tell people how to fix stuff. I point them to LLMs a lot too. People I know who are actually writing a lot of code are usually saying LLMs are nice.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  • 010101010101 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    Developers who don’t understand how the most basic aspects of systems they work on function are a dime a dozen already, I’m not sure LLMs change the scale of that problem.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • baq 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      fighter jet pilots who use the ejection seat are putting their careers at risk, but so are the ones who don't use it when they should.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • flanked-evergl 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      The future is increased productivity. If someone can outproduce you if they use AI, then they will take your job.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • tmcb 2 days ago

This is industrial-grade FOMO. They will take the jobs of the first handful of people. The moment it is obvious that LLMs are a productivity booster, people will learn how to use them, just as happened with every other technology before.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        • boesboes 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          After working with claude code for a few months, I am not worried.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          • falcor84 2 days ago

What does that mean? If you're still paying for Claude Code, you are presumably getting increased productivity, right? Or otherwise, why are you still using it?

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            • lexandstuff 2 days ago

I find it useful. A nice little tool in the toolkit: saves a bunch of typing, helps to overcome inertia, helps me find things in unfamiliar parts of the codebase, amongst other things.

But for it to be useful, you have to already know what you're doing. You need to tell it where to look. Review what it does carefully. Also, I sometimes find particularly hairy bits of code need to be written completely by hand, so I can fully internalise the problem. Only once I've internalised the hard parts of a codebase can I effectively guide CC. Plus there are so many other things in my day-to-day where next-token predictors are just not useful.

In short, it's useful, but no one's losing a job because it exists. Also, the idea of having non-experts manage software systems at any moderate-and-above level of complexity is still laughable.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              • falcor84 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                I don't think the concern is that non-experts would manage large software systems, but that experts would use it to manage larger software systems on their own before needing to hire additional devs, and in that way reduce the number of available roles. I.e. it increases the "pain threshold" before I would say to myself "it's worth the hassle to hire and onboard another dev to help with this".

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          • hackable_sand 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            Blink twice if your employer is abusing you

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          • falcor84 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            I would say that the careers of everyone who views themselves as writing code for a living are already at great risk. So if you're in that situation, you have to see how to go up (or down) the ladder of abstraction, and getting comfortable with using GenAI is possibly a good way to do that.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            • unethical_ban 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              Were accountants that adopted Excel foolish?

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              Like any new tool that automates a human process, humans must still learn the manual process to understand the skill.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              Students should still learn to write all their code manually and build things from the ground up before learning to use AI as an assistant.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              • micromacrofoot 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                everyone's also telling us that if we don't use AI we're putting our careers at risk, and that AI will eventually take our jobs

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                personally I think everyone should shut up

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            • pfisherman 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              Big caveat here is how people are using the LLMs. Here they were using them for things like information recall and ideation. LLMs as producer and human as editor / curator. They did not test another (my preferred) mode of LLM use - human as producer and LLM as editor / curator.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              In this mode of use, you write out all your core ideas as stream of consciousness, bullet points or whatever without constraints of structure or style. Like more content than will make it into the essay. And then have the LLM summarize and clean it up.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              Would be curious to see how that would play out in a study like this. I suspect that the subjects would not be able to quote verbatim, but would be able to quote all the main ideas and feel a greater sense of ownership.
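The "human as producer, LLM as editor" mode above is easy to script. A minimal sketch (the function name and the edit-only instruction wording are my own; any chat-completion API would consume the resulting prompt):

```python
# Sketch of "human as producer, LLM as editor": you write the raw ideas,
# and the model is only asked to restructure them, not add its own.
# Sending the prompt to an actual model is left out; this only builds it.

def build_editor_prompt(raw_notes: list[str]) -> str:
    """Wrap unpolished bullet-point notes in an edit-only instruction."""
    bullets = "\n".join(f"- {note}" for note in raw_notes)
    return (
        "You are an editor, not an author. Rewrite my notes below into "
        "clear prose. Keep every idea; add none of your own.\n\n"
        f"{bullets}"
    )

notes = [
    "LLM-as-editor keeps the thinking on the human side",
    "study only tested LLM-as-producer",
    "prediction: more ownership, better recall of main ideas",
]
prompt = build_editor_prompt(notes)
print(prompt.splitlines()[0])  # the instruction line
```

The point of the fixed preamble is that ownership of the ideas stays with the writer; the model only touches structure and style.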

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              • Insanity 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 So, logically, I know this is the case. I can feel it happen to myself when I use an LLM to generate any kind of work. Although I rarely use it for coding, since my job is at a higher level (designs etc), if I have the LLM write part of a trade-off analysis, I'll remember it less and be less engaged.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                What's really bothering me though, is that I enjoy my job less when using an LLM. I feel less accomplished, I learn less, and I overall don't derive the same value out of my work.. But, on the flip side, by not adopting an LLM I'll be slower than my peers, which then also impacts my job negatively.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                So it's like being stuck between a rock and a hard place - I don't enjoy the LLM usage but feel somewhat obligated to.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                • j45 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   The gap I see is that the definition of "AI use" is not clearly delineated between passive use (similar to consumption) and active use.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   Passive AI use, where you let something else think for you, will obviously cause cognitive decline.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   Active use of AI as a thought partner, where you keep learning as you go yourself, feels different.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  The issue with studying 18-22 year olds is their prefrontal cortex (a center of logic, will power, focus, reasoning, discipline) is not fully developed until 26. But that probably doesn't matter if the study is trying to make a point about technology.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   The skill of telling fake information from real could also increase cognitive capacity.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  • blackqueeriroh 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     I’d encourage folks to listen to this podcast[1], or read the transcript, from two incredibly respected people: Dr. Cat Hicks, a psychologist who studies software teams, and Dr. Ashley Juavinett, a practicing and teaching neuroscientist. They note the many flaws in the study and discuss what actually good brain research would look like.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    1: https://www.changetechnically.fyi/2396236/episodes/17378968-...

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • dns_snek a day ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      That would not surprise me at all given what I've observed in a couple of people who outsource the thinking part to LLMs. One of them has dropped at least 20 IQ points and went from being able to grasp complex concepts with ease to needing an LLM to confirm that indeed, 2+2=4 (only somewhat hyperbolic).

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • r3trohack3r 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        In the people around me I’ve observed:

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        AI solves the 2-sigma problem when used correctly.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        AI is extremely neurodegenerative when used incorrectly.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        The people using it as a research assistant to discover quality sources they can dive into, and as a tutor while working through those resources, are getting smarter.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        The people using it as an “oracle made from magic talking sand” are getting dumber.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        To be fair, the same thing is true of the web in general, but not to the extreme I’ve been seeing with AI.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        I’m predicting the bell curve of IQ is going to flatten quite a bit over the next decade, as people shift two sigma in both directions.
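That "two sigma in both directions" prediction can be sanity-checked with a toy simulation (the 50/50 split and the IQ ~ N(100, 15) scaling are assumptions for illustration, not data): a mixture of two oppositely shifted normals keeps the same mean but flattens into a much wider spread.

```python
import random
import statistics

random.seed(0)
MEAN, SIGMA = 100.0, 15.0  # conventional IQ scaling

# Baseline population vs. one where half shifts +2 sigma and half -2 sigma.
baseline = [random.gauss(MEAN, SIGMA) for _ in range(100_000)]
split = [
    random.gauss(MEAN + 2 * SIGMA, SIGMA) if random.random() < 0.5
    else random.gauss(MEAN - 2 * SIGMA, SIGMA)
    for _ in range(100_000)
]

# The mean stays put, but the curve flattens: a 50/50 mixture of
# N(mu +/- 30, 15) has stdev sqrt(15**2 + 30**2) ~= 33.5.
print(round(statistics.mean(split)), round(statistics.stdev(split), 1))
```

So under these assumptions the average doesn't move at all; only the spread between the two groups does, which is exactly the "flattened bell curve" picture.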

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        • sigbottle 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          obviously obvious caveats like, intentional use is good, lazy use is bad, etc.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          I've found it both helpful and dangerous, it's great for expanding scope obviously, greater search engine.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           But it has also made me notice some of the "harmful patterns", I guess, that I would not have noticed about... myself? For example, AI is way too eager to "solve things" when given a prompt, even an abstract one. It's unable to take a step back and just... think.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          And hey, I notice that I do that too! Lol.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          It's helped me realize more refined "stages" of thinking I guess, even beyond just "plan" and "solve".

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          But for sure a lot of the time I'm just lazy and ask AI to just "go do it" and turn off critical thinking, hoping that it can just 1 shot the problem instead of me breaking it down. Sometimes it genuinely works. Often it doesn't.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          I think if I stay way more intentional with my thinking, I can use it to good use. Which will probably reduce AI usage - but it's the first principles of real critical thinking, not the usage of AI.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          ---

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           These kinds of studies remind me of when my parents told me "stop getting addicted to games" as a kid. Sure, anyone can observe effects; it takes real brains to try and understand the first-principles effects. Addiction went away in a flash once I understood the principles, lol.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          • whatamidoingyo 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            I've been seeing people use LLMs to reply to people on Facebook. Like, they'll just be having a general discussion, and then reply as ChatGPT. I don't know if they think it makes them look smart; I think it has the complete opposite effect.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            Not many people can perform mental arithmetic beyond single-digit numbers. Just plug it into a calculator...

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            We're at the point of people plugging their thoughts into an LLM and having it do the work for them... what's going to happen to thinking?

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            • Mistletoe 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              The future for humans worries me a lot. What evolutionary pressures will exist to keep us intelligent? We are already seeing IQ drop alarmingly across the world. Now AI comes in from the top rope with the steel chair?

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              https://www.ncbi.nlm.nih.gov/search/research-news/3283/

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              • nerpderp82 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 Why does it matter? Some will become Eloi and some Morlocks.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                • latexr 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  > Why does it matter?

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   Because the people around you affect your life. Presumably you don’t want to live in a world of stupid people who are incapable of critical thought or of doing anything that isn’t a direct instruction from a machine. Think about it every time you are frustrated by your interaction with a system you have no choice but to use, such as a bank or a government branch.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   John Green has a quote which I think fits, even if it’s about paying taxes for public education rather than LLM use: https://www.goodreads.com/quotes/1390885-public-education-do...

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  • Mistletoe 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     If you are referencing The Time Machine, I remember reading a neat comic book version of the book when I was a kid. Sometimes I feel we are quite close to having Eloi and Morlocks evolving already.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    >the gentle, childlike Eloi and the subterranean, predatory Morlocks.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    Seems like a nice metaphor for the current two political parties we are provided with.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • latexr 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      > a neat comic book version

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      Wikipedia lists several. Do you recall which you read?

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      https://en.wikipedia.org/wiki/The_Time_Machine#Comics

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • Mistletoe a day ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         I can't find them again and I have tried. They were these little books with black-and-white art, adaptations of classic novels: Frankenstein, etc. I can still see some of the images when I think of the books. It inspired a love in me for those stories that has lasted a lifetime.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        My Mom was a special ed teacher and they were in her classroom as a set. I would go read them after school. Google Gemini suggested Classics Illustrated but I don't think that is it. These were black and white and cheaper than that. Something a teacher would have in their classroom.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        Edit: Upon chiding Google Gemini and reminding it that it was black and white I think it found it!

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         Pocket Classics comics, from 1984.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        https://gentlyhewstone.wordpress.com/2016/06/02/pocket-class...

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        https://www.ebay.com/itm/286295230816

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         Score one for AI, because Google search never found those for me.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        • latexr a day ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          Thank you for your effort in finding those and providing links, much appreciated. The thing that immediately jumps out at me is the awful font choice for a comic. It really makes a difference.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                • wslh 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   Calculators, too? [1] To be fair, we can find articles both for and against the same tool.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  [1] https://www.cell.com/trends/cognitive-sciences/abstract/S136...

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  • hoppp 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     Chatting with vibe coders on Reddit, I can definitely tell... although my hunch is that a lot of people "not smart" enough to learn to program will be entering the field calling themselves programmers.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     I think maybe they are project managers, since the programming is outsourced to AI, but the idea doesn't seem to catch on there.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • digitcatphd 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       So users are more detached from their work? How does this correspond with cognitive decline? Wouldn’t it need to be cross-referenced in other areas besides the task at hand? Seems a bit of a headline-grabbing study to me. Personally, I find thinking with an LLM helps me take a more structured and unbiased approach to my thought process.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • babycheetahbite 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        Does anyone have any suggestions for approaches they are taking to avoid the potential for this? Something I did recently in ChatGPT's 'Instructions' box (so far I have only used ChatGPT) is requesting it to "Make me think through the problem before just giving me the answer." and a few other similar notes.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        • blackqueeriroh 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           Nothing, because it’s a poorly designed study with no peer review, and it has been widely derided in the scientific community both for the author’s conflicts of interest and for how sensationally she spun the results.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          • deadbabe 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             At the very least, don’t use LLMs tightly integrated into your IDE. Keep them at arm’s length; use them the way you use a search engine.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            • spruce_tips 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               yes, in a software engineering context, always use ask mode instead of agent mode unless you're truly doing dumb, tedious work that you've done many times before

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            • teekert 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              Anybody who has tried to shortcut themselves into a report on something using an LLM, and was then asked to defend the plans contained within it knows that writing is thinking. And if you outsource the writing, you do less thinking and with less thinking there is less understanding. Your mental model is less complete, less comprehensive.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              I wouldn't call it "cognitive decline", more "a less deep understanding of the subject".

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               Try fixing bugs in your vibe-coded projects... It's a pain: you haven't learned anything while building, and as a result you don't fully grasp how your creation works.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               LLMs are tools, but also shortcuts, and humans learn by doing ¯\_(ツ)_/¯

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              This is pretty obvious to me after using LLMs for various tasks over the past years.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              • jennyholzer 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                This dynamic is frustrating on the individual level, but it is poisonous on the organizational level.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                I am offended by coworkers who submit incompletely considered, visibly LLM generated code.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                These coworkers are dragging my team down.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                • gkilmain 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  I find this acceptable if your coworkers are checked out and looking for that next big thing

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  • warmedcookie 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    On the bright side, if you are forced to write AI code, at least reviewing PRs of AI generated slop gives your brain an exercise, albeit a frustrating one.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • teekert 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      I'm sure they are, but maybe they just need some guidance. I was fortunate to learn this by myself, but when you just start out, it feels like magic. Only later do you realize you have also sacrificed something.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  • gandalfgeek 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    The title of the study is provocatively framed and the actual findings don't live up to it. I made a short video explaining it-- https://www.youtube.com/watch?v=hLDCi0VwyiQ

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • grim_io 2 days ago

I have never used LLMs to write essays, so I can't comment on that.

What I can comment on is how valuable and energizing it is for me to cooperatively code with LLMs using agents.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      I find it sad to hear when someone finds this experience disappointing, and I wonder what could go wrong to make it so.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • grugagag 2 days ago

I don’t think anyone finds this experience disappointing so much as harmful for cognition, probably in the long run, as the cognitive ‘muscle’ atrophies in some regions as I see it. Remains to be seen how it pans out. However, how much would you be willing to pay for LLMs before you decide it’s not worth it? It is inexpensive at this stage, but this won’t last.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        • grim_io 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          Unless they get Thanos snapped out of existence, I would probably just switch to inferior local models.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          Going back to pre-LLM is not an option for me. Not because I can't, but because I don't want to.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          How are you people using AI? I still have to think a lot. The biggest change is that I don't run around in circles trying to fix annoying bugs.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • rekrsiv 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        I believe this is true for literally anything that replaces practice. We're meant to build muscle memory for things through repetition, but if we sidestep the repetition by farming it out to another process, we never build muscle memory.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        • blackqueeriroh 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          Data doesn’t support it

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          • rekrsiv 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            Feel free to point to said data.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        • lif 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          What are the costs of convenience? Surely most LLM use by consumers leans into that heavily.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          • vonneumannstan 2 days ago

No different from Socrates complaining that writing would ruin his students' memory.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            • siliconc0w 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              Isn't it obvious that you use your brain less to generate an essay with AI vs writing it manually?

I think what you'd want to measure is someone completing a task manually versus someone completing n times as many tasks with a copilot.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              • hnpolicestate 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                I've stopped thinking to formulate content. I now think to prompt.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                This makes complete sense though. We're simply trying to automate the human thinking process like we try to use technology to automate/handoff everything else.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                • CuriouslyC 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  This does not mesh with my personal experience. I find that AI reduces task noise that prevents me from getting in the flow of high level creative/strategic thinking. I can just plan algorithms/models/architectures and very quickly validate, test, iterate and always work at a high level while the AI handles syntax and arcane build processes.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  Maybe it's my natural ADHD tendencies, but having that implementation/process noise removed from my workflow has been transformational. I joke about having gone super saiyan, but it's for real. In the last month, I've gotten 3 papers in pre-print ready state, I'm working on a new model architecture that I'm about to test on ARC-AGI, and I've gotten ~20 projects to initial release or very close (several of which concretely advance SOTA).

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  • jugg1es 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    All of the nay-sayers in the comments here are thinking about this from the POV of a person who reached intellectual maturity without LLMs and now use it as a force multiplier, and rightly so.

However, I think that take is too short-sighted and doesn't take into account the effect these products have on minds that have not yet reached maturity. What happens when you've been using ChatGPT since grade school and have effectively offloaded all the hard stuff to AI through college? Those people won't be using it as a force multiplier - they will be using it to perform basic tasks. Ray-Ban now sells glasses with a built-in LLM, camera, and microphone, so you can interact with it constantly throughout the day. What happens when everyone has one of these devices and uses it for everything?

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • tuesdaynight 2 days ago

I believe that they will solve problems in different ways, just as we solve problems differently from our ancestors because of the internet.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • jugg1es a day ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        I wish I shared your optimism

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • j4hdufd8 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      "The Court did recognize that divesting Chrome and Android would have gone beyond the case’s focus on search distribution, and would have harmed consumers and our partners."

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      Absolute idiots

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • stevenjgarner 2 days ago

This MIT study does not seem to address whether AI use causes true cognitive decline or simply shifts the role of cognition from "doing the task" to "managing the task".

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          • kelsey98765431 2 days ago

Misleading title; the article explicitly says this applies when AI is used to cheat on essays.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            • tsoukase 2 days ago

Cognitive decline in already grown-up brains; decline in intelligence in growing brains, so the reverse Flynn effect will carry on for a few more years.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              • yayitswei 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                Management roles have always involved outsourcing cognitive work to subordinates. Are we seeing a cognitive decline there too? Maybe delegation was the original misalignment problem.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                • tqwhite 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  What a load of crap. I don't believe it for one second. Also, AI has only been an important influence for about twenty minutes.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  Here's what I think: AI causes you to forget how to program but causes you to learn how to plan.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   Also, AI enhances who you are. Dummies get dumber. Smarties get smarter.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  But that's not proven. It's anecdote. And I don't believe anyone knows what is really happening and those that claim to are counterproductive.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  • jugg1es 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     I think you are looking at this through too narrow a lens. What happens when people have ChatGPT built into their eyeglasses and use it for literally everything? Ray-Ban is already selling this as a product.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • tim333 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      I too am leaning towards the load of crap hypothesis.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • briandw 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       This is the standard response to any new technology. Socrates called books the death of knowledge; in the 19th century there was a moral panic about girls reading novels, etc. etc.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • bentt 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        I believe this just based on my experience. I've also noticed that the rewards I feel from programming are stolen, and there's this conflicting feeling of accomplishment without the process. It's maybe a bit like taking mind-altering drugs in that they create reward artificially.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        Much of what keeps me going with work is the reward loop. This changes it fundamentally and it's a bit frightening how compelling the actual productivity is, versus the psychological tradeoff of not getting the reward through the typical process of problem solving.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        • mensetmanusman 2 days ago
                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          • ChrisArchitect 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            Paper from June.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            Discussion then: https://news.ycombinator.com/item?id=44286277

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            • shironandonon_ 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              aren’t those with higher intellect at greater risk of depression?

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              I’m going to use 2x the amount of AI that I was planning to use today.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              • WalterBright 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                The same thing with your body. Use a car instead of walking, and your body declines.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                • Kuinox 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  Remember, they only measured that the less time you spend on a task, the less you remember it.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  • nperez 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    I feel like this sort of thing will be referenced for comic relief in future talks about hysteria at the dawn of the AI era.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    The article actually contains the sentence "The machines aren’t just taking over our work—they’re taking over our minds." which reminds me more of Reefer Madness than an honest critique of modern tech.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • amelius 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      Isn't intelligence -> asking the right questions?

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      Rather than coming up with the right answers?

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • tiborsaas 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        It's both and they form a feedback loop. You come up with a problem (question) and you solve the problem which might lead to more questions. So problem solving and reflecting back on it are both building blocks of intelligence.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • FilosofumRex 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        This study is by Media Lab, which along with Sloan School, Econ, and newly minted Schwarzman College of Computing are not on par with the old school MIT!

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        Besides academics are bitter since LLMs are better at teaching than they are!

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        • rozab 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                           This study itself, and the media coverage of it, are shockingly bad. I wrote a bunch about it at the time and I don't really want to do that again, but here is the lowdown:

                                                                                                                                                                                                                                                                                                                                                                                                                                                                           - This is not a longitudinal study. Each participant did four 20-minute sessions; it just happens that the total study took 4 months.
                                                                                                                                                                                                                                                                                                                                                                                                                                                                           - The paper does not imply long-term harm of any kind; they just measured brain connectivity during the short tasks.
                                                                                                                                                                                                                                                                                                                                                                                                                                                                           - It is not surprising that when asked to use an LLM to write an essay, participants don't remember it. They didn't write it.
                                                                                                                                                                                                                                                                                                                                                                                                                                                                           - It is not surprising they showed less brain activity. They were delegating the task to something else. They were asked to.
                                                                                                                                                                                                                                                                                                                                                                                                                                                                           - I think the authors of the paper deliberately attempted to obscure this. Q7 on p30 is "LLM group: If you copied from ChatGPT, was it copy/pasted, or did you edit it afterwards?" This has been removed from the results section entirely, and other parts of the results do not match the supposed methodology.
                                                                                                                                                                                                                                                                                                                                                                                                                                                                           - The whole paper is extremely sloppy, with grammar mistakes, inconsistencies, and nonsensical charts. Check out Figure 29...

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          • darajava 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            I'm really not advocating for people to push out reams of AI drivel and not learn anything while doing it, but of these three groups which ones are likely to be the most effective?

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            The ability to easily edit in word processors surely atrophied people's ability to really reason out what they wanted to write before committing it to paper. Is it sad that these traits are less readily available in the human populace? Sure. Do we still use word processors anyway because of the tremendous benefits they have? Of course. Similar could be said for spellcheckers, tractors, calculators, power tools, etc.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            With LLMs, it's so much quicker to access a tremendous breadth of information, as well as drill down and get a pretty good depth on a lot of things too. We lose some things by doing it this way, and it can certainly be very misused (usually in a fairly embarrassing way). We need to keep it human, but AI is here to stay and I think the benefits far exceed the "cognitive decline" as mentioned in this journal.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            • nzach 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              > 0% of LLM users could produce a correct quote, while most Brain-only and Search users could

                                                                                                                                                                                                                                                                                                                                                                                                                                                                               I think a better interpretation would be to say that LLMs give people the ability to "filter out" certain tasks in our brains. Maybe a good parallel would be to point out that some drivers are able to drive long distances on what is essentially an "auto-pilot". When this happens they are able to drive correctly but don't really register every single action they've taken during the process.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              In this study you are asking for information that is irrelevant (to the participant). So, I think it is expected that people would filter it out if given the chance.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              [edit] Forgot to link the related xkcd: https://xkcd.com/1414/

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              • davidclark 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                I think the “crushing nihilism” pro-AI argument is what makes me most depressed. We are going to have so much fun when we do not communicate with other humans because it is a task that we can easily “filter out.”

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              • hopelite 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 What a rather ironic headline, generalizing across all "AI use" when the story is about a study specifically on essay-writing tasks. But that kind of slop is par for the course for journalists, and always has been.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 But it does highlight that this mind-slop decline is not new in any way, even if it has accelerated with the erosion of standards.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 Think of that what you want, but if the standards that produced a state everyone enjoys and benefits from are done away with, that state will inevitably start crumbling all around you.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 AI is not really unusual in this manner, other than that it is squarely hitting groups, like public health policy journalists and programmers, that previously thought they were immune because their work was writing. Yes, programmers are essentially just writers.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                • MarkusWandel 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   Muscles atrophy from lack of use. As an aging cyclist with increasing numbers of e-bikes all around me, I may some day have to use one because of age, but what are all these younger riders doing, cheating themselves out of exercise?

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  And so it is with many things. I wrote cursive right through the end of my high school years, but while I can type well on a computer, I have trouble even writing block lettering without mistakes now, and cursive is a lost cause.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  Ubiquitous electronic calculators have eroded the heroic mental calculation skills of old. And now artificial "thinking machines" to do the thinking for you cause your brain to atrophy. Colour me surprised. The Whispering Earring story was mentioned here just recently but is totally topical.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  https://croissanthology.com/earring

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  • asimovfan 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     Writing long texts for school is stupid; in practice it is a skill developed purely in order to do homework. I am not surprised it immediately declines as soon as the necessity is removed.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • patrickmay 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      On the contrary, writing is key to organizing and clarifying one's thoughts. It is an essential part of learning.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      "Writing is nature’s way of letting you know how sloppy your thinking is." -- Guindon

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • asimovfan 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         People write a lot of stuff that is not homework. Maybe the study should have measured some other kind of writing. I would even say that writing for homework is a special skill in bullshitting that does not (and cannot) exist in other forms of writing.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • LMKIIW 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        > ...is a skill that is in practice purely developed in order to do homework.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        I would argue that it helps kids learn how to organize and formulate coherent thoughts and communicate with others. I'm sure it helps them do homework, too.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        • miltonlost 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           An Asimov fan saying writing long texts is stupid? I bet Asimov would have had some strong feelings about that.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          • asimovfan 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             An Asimov novel is not homework; I explicitly referred to homework. People write a lot of stuff other than homework.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            • krapp 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              Well yeah, he was probably getting paid by the word :)

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          • timhigins 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            This was published by an anti-vaxxer/vaccine denier and also seems to be AI-generated. Would recommend linking to the original study instead. The homepage of the site includes articles like "Do Viruses Exist?" "(POLL) 96% support federal control of DC to fight crime" "Autism Spectrum Disorders: Is Immunoexcitotoxicity the Link to the Vaccine Adjuvants? The Evidence" and so does his twitter page: https://x.com/NicHulscher.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            • lowbloodsugar 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              I mean, I felt the same way about people who built things with Visual Basic instead of C or assembly, back in the day. Then there were super smart people who were doing critical things in C/C++ and using VB to make a nice UI.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               AI is no different. Most will use it and never learn the fundamentals, and there is still plenty of work for those people. Then some of us are doing things like examining the state machines that Rust async code generation produces, or inspecting what the Java JIT emits, and still others are hacking ARM assembly. I use AI to take care of the boring bits, just as we used VB back in 1990 because writing a nice UI in C++ was tedious.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              • arzig 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 Honestly, the only use I've found for AI so far is executing refactorings that are mechanical but don't fit neatly into rename/move or multi-cursor editing.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 I'll do the change once or twice, tell the LLM to do the rest with my edits as a reference, and the result is usually passable. It's not fit for anything more, in my opinion.
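A minimal sketch of the kind of refactor meant here, in Python; the names (`Endpoint`, `describe_old`, `describe_new`) are invented for illustration. The change is mechanical, but because each call site unpacks the tuple differently, no rename or multi-cursor tool can do it, so you demonstrate it once and let the LLM replicate the pattern:

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    host: str
    port: int

def describe_old(ep):
    # Before the refactor: callers pass bare (host, port) tuples
    # and index into them positionally.
    return f"{ep[0]}:{ep[1]}"

def describe_new(ep: Endpoint) -> str:
    # After the refactor: the same logic against named fields.
    return f"{ep.host}:{ep.port}"

# Behavior is unchanged; only the data shape at the call sites differs.
assert describe_old(("localhost", 8080)) == describe_new(Endpoint("localhost", 8080))
```

Doing the first one or two conversions by hand also gives you a diff the model can be pointed at, which tends to work better than describing the transformation in prose.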

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                • footy 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  there's going to be an avalanche of dementia for the generations that outsource all their thinking to LLMs

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  • johnisgood 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     IMO that is a misuse of LLMs. You are not supposed to outsource your thinking; you need to stay part of the whole process, including the architectural design. I am, and I have gotten far with LLMs (mostly Claude, not much GPT). I use GPT for personal stuff or ramblings, not for coding.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     There will always be people who misuse something, but we should not punish those who do not. Same with drugs: there are functional junkies who know when to stop, take tolerance breaks, keep their doses in check, and so forth, versus the irresponsible ones. The situation is quite similar, and I do not want AI to be "banned" (assuming it even could be) because of the people who misuse LLMs.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    People, let us have nice things.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     As for the article... did they not say the same thing about search engines and Wikipedia? Remember how making a cheat sheet actually helps you learn, because you write down the very things you wanted to crib? The problem is that people do not even bother reading the LLM's output, and that is on them.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • jajko 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      Misuse or not, who cares about labeling.

The Internet was supposed to be this wonderful free place with all information available and unbiased, not the cesspool of scams and tracking that makes 1984 look like a fairytale for children. Atomic energy was supposed to free mankind from the everlasting struggle for energy dependency, end wars, and whatnot. LLMs were supposed to be X and not Y, and to be used as Z and not BBCCD.

Given what the population loses overall, compared to what's gained (really, what? a mild increase in efficiency, sometimes experienced at the individual level, sometimes made up for PR), I consider these LLMs a net loss for mankind as a whole.

The above should tell you something about human nature, and about how naive some of the brightest of us are.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • johnisgood 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        It works for me, so I would rather not have it taken away from me. Take it away from people who misuse it.

If it is a human-nature issue (and I agree that it is), then we are in deep shit, and this is why we cannot have nice things.

Educate, and if that fails, punish those who "misuse" it. I do not have a better idea. It works quite well for me for coding, and it will continue to work as long as it does not get nerfed.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        • jajko 2 days ago

Nobody is taking it away from you, but as we seem to agree, that ship has sailed for some deep waters; nobody is backpedaling now.

Well, cheers to an even bigger gap between the elite who can afford a good education and upbringing and the cheap, crappy rest. A number of sci-fi novels come to mind where poor, semi-mindless masses are governed by 'educated' elites. I always wondered how badly such a society must have screwed up in the past to end up like that. Nope, the road to hell is indeed paved with good intentions and small little steps which seem innocent or even beneficial on their own, in their time.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          • johnisgood 2 days ago

It is just crazy that people still believe the "think of the children" narratives, or "it is for your own safety". I think these seemingly good intentions (which are not actually good intentions, they only seem so) are a huge problem, as is the lack of resistance, because if you resist, the rebuttal is "you don't want our kids to be safe?!" and so forth, appealing to emotion and shame.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • footy 2 days ago

Sure, we may call that misuse. But there are already people using them this way, and they're marketed this way, and I was not making a point about the correctness of using them this way---just observing that this is going to have far-reaching consequences.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        • johnisgood 2 days ago

I know, and it is a huge problem that people use it this way, and that it is marketed this way.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • tim333 2 days ago

I'm not at all convinced that "AI Use Reprograms the Brain, Leading to Cognitive Decline".

Some of the points, like that LLM users couldn't remember what they wrote and felt disconnected from it, are kind of, well, duh. Obviously that applies to anything written by someone or something else. If that's the level of argument, I very much doubt it supports the "LLMs lead to cognitive decline" hypothesis.

I mean, you won't learn as much by having an LLM write an essay as by writing it yourself, but you can use LLMs and still write essays or whatever. I doubt LLMs are any worse for your head than daytime TV or suchlike.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • rogerkirkness 2 days ago

This article is written by AI. The em dashes and the 'Don't just X, but Y' construction are classic ChatGPT writing patterns in particular.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        • Kuinox 2 days ago

The em dashes exist in ChatGPT output because existing human text contains them, like journal articles.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        • gowld 2 days ago

This research is based on people being given 20 minutes to research and write an "essay"? (Or, in the brain-only case, to write an "essay" without doing any research.)

How is that not utter garbage? You're comparing text that is barely more than a forum comment, and noticing that people who spend the short time thinking and writing are engaged in a different activity from people who spend the time using research tools, and in a different activity again from people who spend the time asking an AI (and waiting for it) to generate content.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          • agigao 2 days ago

"Skill atrophy" is the two-word phrase that will very much define the tech industry in 2025.

And it is something we need to talk about loudly, but I guess it wouldn't crank up the follower counts or valuations of AI grifters.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            • lawlessone 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              Counterpoint: I just asked chatgpt and it says i'm the smrtest boy and very handsome

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              • m3kw9 2 days ago

Is the calculator/Excel a bad thing? I'm OK with not having fast mental calculation (cognitive decline in that area), as it frees me to do the other things that crop up as a result of the speed.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                • m3kw9 2 days ago

For those not critical of what AI says, this will be a bigger issue: they will just bypass their own decision-making and paste the AI response, likely atrophying that thought process.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  • plutoh28 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    Is it just me or does this paper read like it was run through ChatGPT? Kind of ironic if true.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • iphone_elegance 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      well now that explains HN

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • feverzsj 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        "@gork Is this true?"

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        • ETH_start 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          When I'm really using AI, my mind is pushed to its very limits. I'm forced to maintain context that is much more complex than anything I had to keep in working memory pre-AI. But it also feels easier because you don't have to do nearly as much thinking to get every given task done. So maybe I get lazier, not in how much I accomplish, but in how much effort I put forth. So if my previous working intensity applied with AI would let me finish 10x as much work, now I'm content with exerting half as much effort and getting 5x as much work done as my pre-AI self.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          • ath3nd 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            That explains a lot of Hacker News lately. /s

Like everything else in our lives, cognition is "use it or lose it". Outsourcing your decision making and critical thinking to a fancy autocomplete with sycophantic tendencies and no real capacity for reasoning sure is fun, but as the study found, it has its downsides.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            • kibwen 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              To be fair, a lot of commenters on HN were demonstrably suffering the effects of cognitive decline for years before LLMs.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              • AnimalMuppet 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                Not totally sure that's /s.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                Over the last three years or so, I have seen more and more posts where the position just doesn't make sense. I mean, ten years ago, there were posts on HN that I disagreed with that I upvoted anyway, because they made me think. That has become much more rare. An increasing number of posts now are just... weird (I don't know a better word for it). Not thoughtful, not interesting (even if wrong), just weird.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                I can't prove that any of them are AI-generated. But I suspect that at least some of them are.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              • quotemstr 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                "Studies" like this bite at the ankles of every change in information technology. Victorians thought women reading too many magazines would rot their minds.

Given that AI is literally just words on a monitor, like the rest of the internet, I have a strong prior that it's not "reprogram[ming]" anyone's mind, at least not in any way that, say, heavy Reddit use wouldn't.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                • stego-tech 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  That’s a pretty spicy take for first thing in the morning. The confidence with which you assert a repeatedly proven facile argument is…unenviable. “Fractal wrongness,” I’ve seen it called.

We have decades of research - brain scans, studies, experiments, imaging, stimuli responses, etc - showing that when a human no longer has to think about performing a skill, that skill immediately begins to atrophy and the brain adapts accordingly. It's why line workers at McDonald's don't actually learn how to properly cook food (it's all been proceduralized and automated where possible to eliminate the need for critical thinking, thus lowering the quality of labor needed to function), and it's why - at present - we're effectively training a cohort of humans who lack critical thinking and reasoning skills because "that's what the AI is for".

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  This is something I’ve known about long before the current LLM craze, and it’s why I’ve always been wary or hostile to “aggressively helpful” tools like some implementations of autocorrect, or some driving aides: I am not just trying to do a thing quickly, I am trying to do it well, and that requires repeatedly practicing a skill in order to improve.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  Studies like these continue to support my anxiety that we’re dumbing down the best technical generation ever into little more than agent managers and prompt engineers who can’t solve their own problems anymore without subscribing to an AI service.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  • quotemstr 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    Learning and habit formation are not "reprogramming". If you define "reprogramming" as anything that updates neuron weights, the term encompasses all of life and becomes useless.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    My point is that I don't see LLM's effect on the brain as being anything more than the normal experience we have of living and that the level of drama the headline suggests is unwarranted. I don't believe in infohazards.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    Might they result in skill atrophy? For sure! But it's the same kind of atrophy we saw when, e.g. transitioning from paper maps to digital ones, or from memorizing phone numbers to handing out email addresses. We apply the neurons we save by no longer learning paper map navigation and such to other domains of life.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    The process has been ongoing since homo erectus figured out that if you bang a rock hard enough, you get a knife. So what?

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • AnimalMuppet 2 days ago

The "so what" is that the skill in question is critical thinking. Letting that atrophy is a rather bigger deal than letting our paper-map-reading skills atrophy.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      Now, you could argue that, when we use AI, critical thinking skills are more important, because we have to check the output of a tool that is quite prone to error. But in actual use, many people won't do that. We'll be back at "Computers Do Not Lie" (look for the song on Youtube if you're not familiar with it), only with a much higher error rate.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  • flanked-evergl 2 days ago

VS Code Copilot has reprogrammed my mind to the point where not using it is just not worth it. It seldom helps me do difficult things; it often helps me do incredibly mundane things, and if I had to go back to doing those incredibly mundane things by hand I would rather become a gardener.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    • AnimalMuppet 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      > "Studies" like this bite at the ankles of every change in information technology. Victorians thought women reading too many magazines would rot their minds.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      If the Victorians had scientific studies showing that, you might have a point. Instead, you just have a flawed analogy.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      And, why the scare quotes? If you can point to some actual flaws in the study, do so. If not, you're just dismissing a study that you don't agree with, but you have no actual basis for doing so. Whereas the study does give us a basis for accepting its conclusions.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • quotemstr 2 days ago

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        > And, why the scare quotes?

N=54, students and academics only (mostly undergrads), impossible to blind, and, worst of all, the conclusion of the study supports a certain kind of anti-technology moralizing that people want to do anyway. I'd be shocked if it replicated, and even if it did, it wouldn't mean much concretely.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        You could run the same experiment comparing paper maps versus Google Maps in a simulated navigation scenario. I'd bet the paper map group would score higher on various comprehension metrics. So what? Does that make digital maps bad for us? That's the implication of the article, and I don't think the inference is warranted.

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      • ath3nd 2 days ago

If it weren't for studies like this, you'd still think arsenic was a great way to produce a vibrant green pigment to paint your house in the color of nature!

                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        Because of studies like this we know the burning of fossil fuels is a dead-end for us and our climate, and due to that have developed alternative methods of generating energy.

And the study actually showed that LLM usage reprograms your brain and makes you a dumbass. Social media usage does as well; the two aren't mutually exclusive, and if anything their effects compound on an already pretty dumb and gullible population. So if your argument is "but what about Reddit", that's a non-argument called "whataboutism". Look it up, and it might give you a hint as to why you're getting downvoted.

There have been three recent studies showing that:

- 1. 95% of LLM projects fail in the enterprise: https://fortune.com/2025/08/18/mit-report-95-percent-generat...

- 2. Experienced developers become 19% less productive when using an LLM: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...

- 3. LLM usage makes you dumber: https://publichealthpolicyjournal.com/mit-study-finds-artifi...

We have reached a stage where people on the internet mistake their opinion on a subject for something as relevant as an actual study of it.

If you don't have another study, or haven't done the science to disprove this one, how can you so easily dismiss a study that took time, data, and the scientific method to reach a conclusion? I feel we gotta actively and firmly call out that kind of behavior and ridicule it.

• planetmcd 2 days ago

  1. Wait, in a category where the general failure rate is traditionally 75%, a bleeding-edge technology adds 20% more risk. What a shock.

  2. This is an interesting study, but perhaps a limited one. It draws conclusions from a set of 16 developers working on very large projects, many of whom had no previous experience with the editor used in the study, or with LLMs in general. The study did conclude that the tools added time in these cases. There is a reason for the strong sense of value, though, and that would be the thing worth uncovering based on these results: the study notes that 79% of participants continued to use the AI tools. Speed is not the only value to be gained, but it was the only value measured. (The study notes this.)

  3. The author didn't read, or used AI to poorly summarize, the poorly thought out study it is based on. Also, it seems you didn't read the study.