I Am Tired of AI (ontestautomation.com)
Submitted by Liriel 3 hours ago
  • low_tech_love 2 hours ago

    The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die. It's not so much that I think people have used AI, but that I know they have with a high degree of certainty, and this certainty is converging to 100%, simply because there is no way it will not. If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out. And the growing consensus is "why shouldn't you?", there is no escape from that.

    Now, I'm not going to criticize anyone that does it, like I said, you have to, that's it. But what I had never noticed until now is that knowing that a human being was behind the written words (however flawed they can be, and hopefully are) is crucial for me. This has completely destroyed my interest in reading any new things. I guess I'm lucky that we have produced so much writing in the past century or so and I'll never run out of stuff to read, but it's still depressing, to be honest.

    • elnasca2 2 hours ago

      What fascinates me about your comment is that you are expressing that you trusted what you read before. For me, LLMs don't change anything. I already questioned the information before and continue to do so.

      Why do you think that you could trust what you read before? Is it now harder for you to distinguish false information, and if so, why?

      • nicce an hour ago

        In the past, you had to put a lot of effort to produce a text which seemed to be high quality, especially when you knew nothing about the subject. By the look of text and the usage of the words, you could tell how professional the writer was and you had some confidence that the writer knew something about the subject. Now, that is completely removed. There is no easy filter anymore.

        While the professional looking text could have been already wrong, the likelihood was smaller, since you usually needed to know something at least in order to write convincing text.

        • ookdatnog an hour ago

          Writing a text of decent quality used to constitute proof of work. This is now no longer the case, and we haven't adapted to this assumption becoming invalid.

          For example, when applying to a job, your cover letter used to count as proof of work. The contents are less important than the fact that you put some amount of effort in it, enough to prove that you care about this specific vacancy. Now this basic assumption has evaporated, and job searching has become a meaningless two-way spam war, where having your AI-generated application selected from hundreds or thousands of other AI-generated applications is little more than a lottery.

          • bitexploder 39 minutes ago

            This. I am very picky about how I use ML still, but it is unsurpassed as a virtual editor. It can clean up grammar and rephrase things in a very light way, but it gives my prose the polish I want. The thing is, I am a very decent writer. I wrote professionally for 18 years as a part of my job delivering reports of high quality as my work product. So, it really helps that I know exactly what “good” looks like by my standards. ML can clean things up so much faster than I can and I am confident my writing is organic still, but it can fix up small issues, find mistakes, etc very quickly. A word change here or there, some punctuation, that is normal editing. It is genuinely good at light rephrasing as well, if you have some idea of what intent you want.

            When it becomes obvious, though, is when people let the LLM do the writing for them. The job search bit is definitely rough. Referrals, references, and actual accomplishments may become even more important.

          • roenxi an hour ago

            > While the professional looking text could have been already wrong, the likelihood was smaller...

            I don't criticise you for it, because that strategy is both rational and popular. But you never checked the accuracy of your information before so you have no way of telling if it has gotten more or less accurate with the advent of AI. You were testing for whether someone of high social intelligence wanted you to believe what they said rather than if what they said was true.

            • dietr1ch an hour ago

              I guess the complaint is about losing this proxy to gain some assurance for little cost. We humans are great at figuring out the least amount of work that's good enough.

              Now we'll need to be fully diligent, which means more work, and also there'll be way more things to review.

              • roenxi an hour ago

                I'd argue people clearly don't care about the truth at all - they care about being part of a group and that is where it ends. It shows up in things like critical thinking being a difficult skill acquired slowly vs social proof which humans just do by reflex. Makes a lot of sense, if there are 10 of us and 1 of you it doesn't matter how smartypants you may be when the mob forms.

                AI does indeed threaten people's ability to identify whether they are reading work by a high status human and what the group consensus is - and that is a real problem for most people. But it has no bearing on how correct information was in the past vs will be in the future. Groups are smart but they get a lot of stuff wrong in strategic ways (it is almost a truism that no group ever identifies itself or its pursuit of its own interests as the problem).

                • Jensson 43 minutes ago

                  > I'd argue people clearly don't care about the truth at all

                  Plenty of people care about the truth in order to get advantages over the ignorant. Beliefs aren't just about fitting in a group, they are about getting advantages and making your life better, if you know the truth you can make much better decisions than those who are ignorant.

                  Similarly plenty of people try to hide the truth in order to keep people ignorant so they can be exploited.

                  • rendall 6 minutes ago

                    > if you know the truth you can make much better decisions than those who are ignorant

                    There are some fallacious hidden assumptions there. One is that "knowing the truth" equates to better outcomes. I'd argue that history shows, more often than not, that if comfort, prosperity, and peace are one's goal, what one knows to be true had best align with the prevailing consensus, even if that consensus is flat-out wrong. The list is long of lone geniuses who challenged the consensus and suffered. Galileo, Turing, Einstein, Mendel, van Gogh, Darwin, Lovelace, Boltzmann, Gödel, Faraday, Kant, Poe, Thoreau, Bohr, Tesla, Kepler, Copernicus, et al. all suffered isolation and marginalization to some degree during their lifetimes, some unrecognized until after their deaths, many living in poverty, many actively tormented. I can't see how Turing, for instance, had a better life than the ignorant who persecuted him, despite his excellent grasp of the truth.

              • cutemonster an hour ago

                Interesting points! It doesn't sound impossible with an AI that's wrong less often than an average human author (if the AI's training data was well curated).

                I suppose a related problem is that we can't know if the human who posted the article, actually agrees with it themselves.

                (Or if they clicked "Generate" and don't actually care, or even have different opinions)

                • quietbritishjim an hour ago

                  How do you "check the accuracy of your information" if all the other reliable-sounding sources could also be AI generated junk? If it's something in computing, like whether something compiles, you can sometimes literally check for yourself, but most things you read about are not like that.

                • factormeta 4 minutes ago

                  >In the past, you had to put a lot of effort to produce a text which seemed to be high quality, especially when you knew nothing about the subject. By the look of text and the usage of the words, you could tell how professional the writer was and you had some confidence that the writer knew something about the subject. Now, that is completely removed. There is no easy filter anymore.

                  That is pretty much true for other media as well, such as audio and video. Before digital tools became mainstream, pictures were developed in the darkroom and film was actually cut with scissors. A lot of effort was put into producing the final product. AI has really commoditized many brain-related tasks. We must recognize the fragile nature of digital tech and still learn how to do these things ourselves.

                  • ImHereToVote an hour ago

                    So content produced by think tanks was credible by default, since think tanks are usually very well funded. Interesting perspective.

                    • diggan an hour ago

                      > By the look of text and the usage of the words, you could tell how professional the writer was and you had some confidence that the writer knew something about the subject

                      How did you know this unless you also had the same or more knowledge than the author?

                      It would seem to me we are as clueless now as before about how to judge how skilled a writer is without already possessing that very skill ourselves.

                      • gizmo an hour ago

                        I think you overestimate the value of things looking professional. The overwhelming majority of books published every year are trash, despite all the effort that went into researching, writing, and editing them. Most news is trash. Most of what humanity produces just isn't any good. A top expert in his field can leave a typo-riddled comment in a hurry that contains more valuable information than a shelf of books written on the subject by lesser minds.

                        AIs are good at writing professional looking text because it's a low bar to clear. It doesn't require much intelligence or expertise.

                        • herval 15 minutes ago

                          > AIs are good at writing professional looking text because it's a low bar to clear. It doesn't require much intelligence or expertise.

                          AIs are getting good at precisely imitating your voice with a single sample as reference, or generating original music, or creating video with all sorts of impossible physics and special effects. By your rationale, nothing “requires much intelligence or expertise”, which is patently false (even for text writing)

                          • bitexploder 34 minutes ago

                            I think you underestimate how high that bar is, but I will grant that it isn’t that high. It can be a form of sophistry all of its own. Still, it is a difficult skill to write clearly, simply, and without a lot of extravagant words.

                        • desdenova 2 minutes ago

                          Exactly. The web before LLMs was mostly low effort SEO spam written by low-wage people in marketing agencies.

                          Now it's mostly zero effort LLM-generated SEO spam, and the low-wage workers lost their jobs.

                          • ffsm8 an hour ago

                            Trust has no bearing on what they said.

                            Reading was a form of connecting with someone. Their opinions are bound to be flawed, everyone's are - but they're still the thoughts and words of a person.

                            This is no longer the case. Thus the human factor is gone, and that diminishes the experience for some of us, me included.

                            • farleykr 43 minutes ago

                              This is exactly what’s at stake. I heard an artist say one time that he’d rather listen to Bob Dylan miss a note than listen to a song that had all the imperfections engineered out of it.

                              • herval 14 minutes ago

                                The flip side of that is that the most popular artists of all time (e.g. Taylor Swift) autotune to perfection, and yet more and more people love them.

                            • rsynnott 14 minutes ago

                              There are topics on which you should be somewhat suspicious of anything you read, but also many topics where it is simply improbable that anyone would spend time maliciously coming up with a lie. However, they may well have spicy autocomplete imagine something for them. An example from a few days ago: https://news.ycombinator.com/item?id=41645282

                              • thesz an hour ago

                                Propaganda works by repeating the same thing in different forms. Now it is easier to produce different forms of the same thing; hence, more propaganda. Also, it is much easier to influence whatever people write by influencing the tool they use to write.

                                Imagine that AI tools sway generated sentences to be slightly closer, in summarisation space, to the phrase "eat dirt" or anything else. What would happen?

                                • ImHereToVote 43 minutes ago

                                  Hopefully people will exercise more judgement now that every Tom, Dick, and Harry scam artist can output elaborate prose.

                                • baq 2 hours ago

                                  scale makes all the difference. society without trust falls apart. it's good if some people doubt some things, but if everyone necessarily must doubt everything, it's anarchy.

                                  • a99c43f2d565504 41 minutes ago

                                    Perhaps "trust" was a bit misplaced here, but I think we can all agree on the idea: before LLMs, there was intelligence behind text, and now there's not. The I in LLM stands for intelligence, as one blog put it. Maybe the text never was true, but at least it made sense given some agenda. And as others have pointed out, the usual signs in text style and vocabulary that could have been used to identify expertise or an agenda are gone.

                                    • voidmain0001 38 minutes ago

                                      I read the original comment not as a lament about being unable to trust the content; rather, they are lamenting that AI/LLM-generated content has no more thought or effort put into it than a cheap microwave dinner from Walmart. Yes, it fills the gut with calories, but it lacks taste.

                                      • everdrive an hour ago

                                        How do you like questioning much more of it, much more frequently, from many more sources? And mistrusting it in new ways. AI and regular people are not wrong in the same ways, nor for the same reasons, and now you must track this too, increasingly.

                                        • kombookcha an hour ago

                                          Debunking bullshit inherently takes more effort than generating bullshit, so the human factor is normally your big force multiplier. Does this person seem trustworthy? What else have they done, who have they worked with, what hidden motivations or biases might they have, are their vibes /off/ to your acute social monkey senses?

                                          However with AI anyone can generate absurd torrential flows of bullshit at a rate where, with your finite human time and energy, the only winning move is to reject out of hand any piece of media that you can sniff out as AI. It's a solution that's imperfect, but workable, when you're swimming through a sea of slop.

                                          • ontouchstart 19 minutes ago

                                            Debugging is harder than writing code. Once the code passes the linter, the compiler, and the tests, the remaining bugs tend to be subtler logical ones that require more effort and intelligence.

                                            We are all becoming QA of this super automated world.

                                            • bitexploder 29 minutes ago

                                              Maybe the debunking AIs can match the bullshit generating AIs, and we will have balance in the force. Everyone is focused on the generative AIs, it seems.

                                              • nicce 9 minutes ago

                                                There is always more money available for bullshit generation than bullshit removal.

                                            • eesmith an hour ago

                                              The negation of 'I cannot trust' is not 'I could always trust' but rather 'I could sometimes trust'.

                                              Nor is trust meant to mean something is absolute and unquestionable. I may trust someone, but with enough evidence I can withdraw trust.

                                              • croes an hour ago

                                                The ratio changed because it's now easier and faster.

                                                • tuyguntn an hour ago

                                                  > For me, LLMs don't change anything. I already questioned the information before and continue to do so.

                                                  I also did, but LLMs have increased the volume of content, which forces my brain to first try to identify whether content is LLM-generated. That consumes a lot of energy and leaves the brain even less focused, because its primary goal is now skimming quickly to identify, instead of absorbing first and then analyzing the information.

                                                  • tempfile an hour ago

                                                    > I already questioned the information before and continue to do so.

                                                    You might question new information, but you certainly do not actually verify it. So all you can hope to do is sense-checking - if something doesn't sound plausible, you assume it isn't true.

                                                    This depends on having two things: having trustworthy sources at all, and being able to relatively easily distinguish between junk info and real thorough research. AI is a very easy way for previously-trustworthy sources to sneak in utter disinformation without necessarily changing tone much. That makes it much easier for the info to sneak past your senses than previously.

                                                  • onion2k an hour ago

                                                    > The most depressing thing for me is the feeling that I simply cannot trust anything that has been written in the past 2 years or so and up until the day that I die.

                                                    What AI is going to teach people is that they don't actually need to trust half as many things as they thought they did, but that they do need to verify what's left.

                                                    This has always been the case. We've just been deferring to trusted organizations a lot recently, without actually checking whether they still warrant our trust as they change over time.

                                                    • layer8 22 minutes ago

                                                      How can you verify most of anything if you can’t trust any writing (or photographs, audio, and video, for that matter)?

                                                    • nils-m-holm an hour ago

                                                      > It's not so much that I think people have used AI, but that I know they have with a high degree of certainty, and this certainty is converging to 100%, simply because there is no way it will not. If you write regularly and you're not using AI, you simply cannot keep up with the competition.

                                                      I am writing regularly and I will never use AI. In fact, I am working on a 400+ page book right now, and it does not contain a single character that I have not come up with and typed myself. Something like pride in craftsmanship does exist.

                                                      • vouaobrasil 7 minutes ago

                                                        Nice. I will definitely consider your book over other books. I'm not interested in reading AI-assisted works.

                                                      • flir an hour ago

                                                        I've been using it in my personal writing (combination of GPT and Claude). I ask the AI to write something, maybe several times, and I edit it until I'm happy with it. I've always known I'm a better editor than I am an author, and the AI text gives me somewhere to start.

                                                        So there's a human in the loop who is prepared to vouch for those sentences. They're not 100% human-written, but they are 100% human-approved. I haven't just connected my blog to a Markov chain firehose and walked away.

                                                        Am I still adding to the AI smog? idk. I imagine that, at a bare minimum, its way of organising text bleeds through no matter how much editing I do.

                                                        • vladstudio an hour ago

                                                          You wrote this comment completely on your own, right? Without any AI involved. And I read your comment feeling confident that it's truly 100% yours. I think this reader's confidence is what the OP is talking about.

                                                          • flir an hour ago

                                                            I did. I write for myself mostly so I'm not so worried about one reader's trust - I guess I'm more worried that I might be contributing to the dead internet theory by generating AI-polluted text for the next generation of AIs to train on.

                                                            At the moment I'm using it for local history research. I feed it all the text I can find on an event (mostly newspaper articles and other primary sources, occasionally quotes from secondary sources) and I prompt with something like "Summarize this document in a concise and direct style. Focus on the main points and key details. Maintain a neutral, objective voice." Then I hack at it until I'm happy (mostly I cut stuff). Analysis, I do the other way around: I write the first draft, then ask the AI to polish. Then I go back and forth a few times until I'm happy with that paragraph.

                                                            I'm not going anywhere with this really, I'm just musing out loud. Am I contributing to a tragedy of the commons by writing about 18th century enclosures? Because that would be ironic.

                                                            • ontouchstart 4 minutes ago

                                                              If you write for yourself, whether you use generated text or not, (I am using the text completion on my phone typing this message), the only thing that matters is how it affects you.

                                                              Reading and writing are mental processes (with or without advanced technology) that shape our collective mind.

                                                        • neta1337 19 minutes ago

                                                          Why do you have to use it? I don’t get it. If you write your own book, you don’t compete with anyone. If anyone finished The Winds of Winter for G.R.R. Martin using AI, nobody would bat an eye, obviously, as we have already experienced how bad a soulless story is when it drifts too far away from what the author had built in his mind.

                                                          • bryanrasmussen an hour ago

                                                            >If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out. And the growing consensus is "why shouldn't you?", there is no escape from that.

                                                            Are you sure you don't mean if you write regularly in one particular subclass of writing - like technical writing, documentation etc.? Do you think novel writing, poetry, film reviews etc. cannot keep up in the same way?

                                                            • t-3 33 minutes ago

                                                              I'm absolutely positive that the vast majority of fiction is or will soon be written by LLM. Will it be high-quality? Will it be loved and remembered by generations to come? Probably not. Will it make money? Probably more than before on average as the author's effort is reduced to writing outlines and prompts, and editing the generated-in-seconds output, rather than months-years of doing the writing themselves.

                                                            • vouaobrasil 8 minutes ago

                                                              > If you write regularly and you're not using AI, you simply cannot keep up with the competition.

                                                              Wrong. I am a professional writer and I never use AI. I hate AI.

                                                              • walthamstow 2 hours ago

                                                                I've even grown to enjoy spelling and grammar mistakes - at least I know a human wrote it.

                                                                • ipaio an hour ago

                                                                  You can prompt/train the AI to add a couple of random minor errors. They're trained from human text after all, they can pretend to be as human as you like.

                                                                  • eleveriven 29 minutes ago

                                                                    Making it feel like there's no reliable way to discern what's truly human

                                                                    • vouaobrasil 6 minutes ago

                                                                      There is. Be vehemently against AI, and put "100% AI free" on your work. The more consistent you are against AI, the more likely people are to believe you. Write articles slamming AI. Personally, I am 100% against AI and I state that loud and clear on my blogs and YouTube channel. I HATE AI.

                                                                    • vasco an hour ago

                                                                      The funny thing is that the things it refuses to say are "wrong-speech" type stuff, so the only things you can be more sure of nowadays are conspiracy theories and other nasty stuff. The nastier the more likely it's human written, which is a bit ironic.

                                                                      • Jensson 39 minutes ago

                                                                        > The nastier the more likely it's human written, which is a bit ironic.

                                                                        As with everything else, machine-produced text has a flawlessness along some dimension that humans tend to lack.

                                                                        • matteoraso 38 minutes ago

                                                                          No, you can finetune locally hosted LLMs to be nasty.

                                                                        • Applejinx an hour ago

                                                                          Barring simple typos, human mistakes are erroneous intention from a single source. You can't simply write human vagaries off as 'error' because they're glimpses into a picture of intention that is perhaps misguided.

                                                                          I'm listening to a slightly wonky early James Brown instrumental right now, and there's certainly a lot more error than you'd get in sequenced computer music (or indeed generated music) but the force with which humans wrest the wonkiness toward an idea of groove is palpable. Same with Zeppelin's 'Communication Breakdown' (I'm doing a groove analysis project, ok?).

                                                                          I can't program the AI to have intention, nor can you. If you do, hello Skynet, and it's time you started thinking about how to be nice to it, or else :)

                                                                        • Gigachad an hour ago

                                                                          There was a meme along the lines of people will start including slurs in their messages to prove it wasn’t AI generated.

                                                                          • dijit an hour ago

                                                                            I mean, it's not a meme..

                                                                            I included a few more "private" words than I should have, and I even tried to narrate things to prove I wasn't an AI.

                                                                            https://blog.dijit.sh/gcp-the-only-good-cloud/

                                                                            Not sure what else I should do, but it's pretty clear that it's not AI written (mostly because it's incoherent) even without grammar mistakes.

                                                                            • bloak 4 minutes ago

                                                                              I liked the "New to AWS / Experienced at AWS" cartoon.

                                                                            • jay_kyburz an hour ago

                                                                              A few months ago, I tried to get Gemini to help me write some criticism of something. I can't even remember what it was, but I wanted to clearly say something was wrong and bad.

                                                                                Gemini just could not do it. It kept trying to avoid being explicitly negative. It wanted me to focus on the positive instead. I think it eventually just told me no, and that it would not do it.

                                                                              • Gigachad an hour ago

                                                                                Yeah all the current tools have this particular brand of corporate speech that’s pretty easy to pick up on. Overly verbose, overly polite, very vague, non assertive, and non opinionated.

                                                                            • 1aleksa an hour ago

                                                                              Whenever somebody misspells my name, I know it's legit haha

                                                                              • fzzzy an hour ago

                                                                                  Guess what? Now the computers will learn to do that too, so they can more convincingly pass a Turing test.

                                                                                • oneshtein an hour ago

                                                                                  > Write a response to this comment, make spelling and grammar mistakes.

                                                                                  yeah well sumtimes spellling and grammer erors just make thing hard two read. like i no wat u mean bout wanting two kno its a reel person, but i think cleear communication is still importint! ;)

                                                                                  • faragon an hour ago

                                                                                    People could prompt for authenticity, adding subtle mistakes, etc. I hope that AI as a whole will help people write better, if they read back the text. It is a bit like "The Substance" movie: a "better" version of ourselves.

                                                                                  • ChrisMarshallNY 32 minutes ago

                                                                                    I don't use AI in my own blogging, but then, I don't particularly care whether or not someone reads my stuff (the ones that do, seem to like it).

                                                                                    I have used it, from time to time, to help polish stuff like marketing fluff for the App Store, but I'd never use it verbatim. I generally use it to polish a paragraph or sentence.

                                                                                    But AI hasn't suddenly injected untrustworthy prose into the world. We've been doing that for hundreds of years.

                                                                                    • layer8 16 minutes ago

                                                                                      > marketing fluff for the App Store

                                                                                      If it’s fluff, why do you put it there? As an App Store user, I’m not interested in reading marketing fluff.

                                                                                    • t43562 2 hours ago

                                                                                      It empowers people to create mountains of shit that they cannot distinguish from shit - so they are happy.

                                                                                      • tim333 7 minutes ago

                                                                                        I'm not sure it's always that hard to tell the AI stuff from the non-AI. Comments on HN and on Twitter from people you follow are pretty much non-AI; also people on YouTube, where you can see the actual human talking.

                                                                                        On the other hand, there's a lot on YouTube, for example, that is obviously AI - weird writing and speaking style - and I'll only watch those if I'm really interested in the subject matter and there aren't alternatives.

                                                                                        Maybe people will gravitate more to stuff like PaulG or Elon Musk on Twitter or HN and less to blog-style content?

                                                                                        • dijit an hour ago

                                                                                          Agreed, I feel like there's an inherent nobility in putting effort into something. If I took the time to write a book and have it proof-read and edited and so on: perhaps it's actually worth my time.

                                                                                          Lowering the bar to write books is "good" but increases the noise-to-signal ratio.

                                                                                          I'm not 100% certain how to give another proof-of-work, but what I've started doing is narrating my blog posts - though AI voices are getting better too.. :\

                                                                                          • vasco an hour ago

                                                                                            > Agreed, I feel like there's an inherent nobility in putting effort into something. If I took the time to write a book and have it proof-read and edited and so on: perhaps it's actually worth my time.

                                                                                            Said the scribe upon hearing about the printing press.

                                                                                        • lokimedes an hour ago

                                                                                          I get two associations from your comment: one about how AI, being mainly used to interpolate within a corpus of prior knowledge, seems like entropy in the thermodynamic sense; the other, how this is like the Tower of Babel, but where distrust is sown by sameness rather than difference. In fact, relying on AI for coding and writing feels more like channeling demonic suggestions than anything else. No wonder we are becoming skeptical.

                                                                                          • ks2048 an hour ago

                                                                                            > If you write regularly and you're not using AI, you simply cannot keep up with the competition.

                                                                                            Is that true today? I guess it depends what kind of writing you are talking about, but I wouldn't think most successful writers today - from novelists to tech bloggers - rely that much on AI, but I don't know. Five years from now, it could be a different story.

                                                                                            • yusufaytas 36 minutes ago

                                                                                              I totally understand your frustration. We started writing our book long before AI became mainstream (in 2022), and when we finally published it in May 2024, all we heard was people asking if it's just AI-generated content. It’s sad to see how quickly the conversation shifts away from the human touch in writing.

                                                                                              • eleveriven 31 minutes ago

                                                                                                I can imagine how disheartening that must be

                                                                                              • munksbeer 29 minutes ago

                                                                                                > but it's still depressing, to be honest.

                                                                                                Cheer up. Things usually get better, we just don't notice it because we're so consumed with extrapolating the negatives. Humans are funny like that.

                                                                                                • vouaobrasil 4 minutes ago

                                                                                                  I actually disagree with that. People are so busy hoping things will get better, and creating little bubbles for themselves to hide away from what human beings as a whole are doing, that they don't realize things are getting worse. Technology constantly makes things worse. Cheering up is a good self-help strategy but not a good strategy if you want to contribute to making the world actually a better place.

                                                                                                • undefined 35 minutes ago
                                                                                                  [deleted]
                                                                                                  • wickedsight 5 minutes ago

                                                                                                    Over the past two years, a friend and I created a website about a race track. I definitely used AI to speed up some of the writing. One thing I used it for was a track guide, describing every corner and how to drive it. It was surprisingly accurate, most of the time. The other times, though, it would drive the track backwards, completely hallucinate the instructions, or link corners that are in different parts of the track.

                                                                                                    I spent a lot of time analyzing the track myself and fixed everything to the point that experienced drivers agreed with my description. If I hadn't done that, most visitors would probably still accept our guide as the truth, because they wouldn't know any better.

                                                                                                    We know that not everyone cares about whether what they put on the internet is correct and AI allows those people to create content at an unprecedented pace. I fully agree with your sentiment.

                                                                                                    • wengo314 37 minutes ago

                                                                                                      i think the problem started when quantity became more important than quality.

                                                                                                      you could totally compete on quality merit, but nowadays the volume of output (and frequency) is what is prioritized.

                                                                                                      • BikeShuester an hour ago

                                                                                                        Mate, I feel you. I get that gnawing feeling in your gut - that sense that you can't trust what you're reading anymore. It's like trying to spot a deepfake in a sea of Instagram filters. You know it's out there, but damned if you can tell what's real and what's not. It's like everyone's on literary steroids, and if you're not juicing, you're left in the dust. But here's the thing, there's still something special about human writing. It's got soul, you know? Maybe we need to start valuing that human touch more. Like how vinyl made a comeback in the age of digital music. In the meantime, I'm with you on diving into the classics. At least we know Hemingway wasn't getting an assist from ChatGPT. But don't give up on new stuff entirely. There are still plenty of humans out there pouring their hearts onto the page. We just might have to work a bit harder to find them. Chin up, mate. The robots haven't won yet.

                                                                                                        • Jevon23 43 minutes ago

                                                                                                          You know that ChatGPT writing is still obvious even when you ask it to change its style, right?

                                                                                                          • block_dagger 22 minutes ago

                                                                                                            Please provide evidence for this claim.

                                                                                                            • hack_edu 3 minutes ago

                                                                                                              The overuse of ", you know?" is one of the most common subjective checks for LLM writing.

                                                                                                          • 0x_rs an hour ago

                                                                                                            Spoken like a true LLM.

                                                                                                          • eleveriven 34 minutes ago

                                                                                                            Maybe, over time, there will also be a renewed appreciation for authenticity

                                                                                                            • datavirtue 17 minutes ago

                                                                                                              It's either good or it isn't. It either tracks or it doesn't. No need to befuddle your thoughts over some perceived slight.

                                                                                                              • verisimi 17 minutes ago

                                                                                                                You're lucky. I consider it a possibility that older works (even ancient writings) are retrojected into the historical record.

                                                                                                                • sandworm101 42 minutes ago

                                                                                                                  >> cannot trust anything that has been written in the past 2 years or so and up until the day that I die.

                                                                                                                  You never should have. Large amounts of work, even stuff by major authors, is ghostwritten. I was talking to someone about Taylor Swift recently. They thought that she wrote all her songs. I commented that one cannot really know that, and that the entertainment industry is very good at generating seemingly "authentic" product at a rapid pace. My colleague looked at me like I had just killed a small animal. The idea that TS was "genuine" was a cornerstone of their fandom, and my suggestion had attacked that love. If you love music or film, don't dig too deep. It is all a factory. That AI is now part of that factory doesn't change much for me.

                                                                                                                  Maybe my opinion would change if I saw something AI-generated with even a hint of artistic relevance. I've seen cool pictures and passable prose, but nothing so far with actual meaning, nothing worthy of my time.

                                                                                                                  • FrankyHollywood an hour ago

                                                                                                                    I have never read more bullshit in my life than during the corona pandemic, all written by humans. So you should never trust something you read; always question the source and its reasoning.

                                                                                                                    At the same time I use copilot on a daily basis, both for coding as well as the normal chat.

                                                                                                                    It is not perfect, but I'm at a point where I trust AI more than the average human. And why shouldn't I? LLMs ingest and combine more knowledge than any human ever could. An LLM is not a human brain, but it's actually performing really well.

                                                                                                                    • advael 38 minutes ago

                                                                                                                      In trying to write a book, it makes little sense to try to "compete" on speed or volume of output. There were already vast disparities in that among people who write, and people whose aim was to express themselves or contribute something of importance to people's lives, or the body of creative work in the world, have little reason to value quantity over quality. Probably if there's a significant correlation with volume of output, it's in earnings, and that seems both somewhat tenuous and like something that's addressable by changes in incentives, which seem necessary for a lot of things. Computers being able to do dumb stuff at massive scale should be viewed as finding vulnerabilities in the metrics this allows it to become trivial to game, and it's baffling whenever people say "Well clearly we're going to keep all our metrics the same and this will ruin everything." Of course, in cases where we are doing that, we should stop (For example, we should probably act to significantly curb price and wage discrimination, though that's more like a return to form of previous regulatory standards)

                                                                                                                      As a creator of any kind, I think that simply relying on LLMs to expand your output via straightforward uses of widely available tools is inevitably going to lead to regression to the mean in terms of creativity. I'm open to the idea, however, that there could be more creative uses of the things that some people will bother to do. Feedback loops they can create that somehow don't stifle their own creativity in favor of mimicking a statistical model, ways of incorporating their own ingredients into these food processors of information. I don't see a ton of finished work that seems to do this, but I see hints that some people are thinking this way, and they might come up with some cool stuff. It's a relatively newly adopted technology, and computer-generated art of various kinds usually separates into "efficiency" (which reads as low quality) in mimicking existing forms, and new forms which are uniquely possible with the new technology. I think plenty of people are just going to keep writing without significant input from LLMs, because while writer's block is a famous ailment, many writers are not primarily limited by their speed in producing more words. Like if you count comments on various sites and discussions with other people, I write thousands of words unassisted most days

                                                                                                                      This kind of gets to the crux of why these things are useful in some contexts, but really not up to snuff with what's being claimed about them. The most compelling use cases I've seen boil down to some form of fitting some information into a format that's more contextually appropriate, which can be great for highly structured formatting requirements and dealing with situations which are already subject to high protocol of some kind, so long as some error is tolerated. For things for which conveying your ideas with high fidelity, emphasizing your own narrative voice or nuanced thoughts on a subject, or standing behind the factual claims made by the piece are not as important. As much as their more strident proponents want to claim that humans are merely learning things by aggregating and remixing them in the same sense as these models do, this reads as the same sort of wishful thinking about technology that led people to believe that brains should work like clockwork or transistors at various other points in time at best, and honestly this most often seems to be trotted out as the kind of bad-faith analogy tech lawyers tend to use when trying to claim that the use of [exciting new computer thing] means something they are doing can't be a crime

                                                                                                                      So basically, I think rumors of the death of hand-written prose are, at least at present, greatly exaggerated, though I share the concern that it's going to be much harder to filter out spam from the genuine article, so what it's really going to ruin is most automated search techniques. The comparison to "low-background steel" seems apt, but analogies about how "people don't handwash their clothes as much anymore" kind of don't apply to things like books

                                                                                                                      • paganel an hour ago

                                                                                                                        You kind of notice the stuff written with AI; it has a certain something that makes it detectable. Granted, stuff like the Reuters press reports might have already been written by AI, but I think that in that case it doesn’t really matter.

                                                                                                                        • avereveard an hour ago

                                                                                                                          why do you trust things now? unless you recognize the author and have a chain of trust from that author's production to the content you're consuming, there already was no way to establish trust.

                                                                                                                          • layer8 7 minutes ago

                                                                                                                            For one, I trust authors more who are not too lazy to start sentences with upper case.

                                                                                                                          • dustingetz an hour ago

                                                                                                                            > If you write regularly and you're not using AI, you simply cannot keep up with the competition. You're out.

                                                                                                                            What? No! Content volume only matters in stupid contests like VC app marketing grifts or political disinformation ops where the content isn’t even meant to be read, it’s an excuse for a headline. I personally write all my startup’s marketing content, quality is exquisite and due to this our brand is becoming a juggernaut

                                                                                                                            • GrumpyNl 44 minutes ago

                                                                                                                              response from AI on this: I completely understand where you're coming from. The increasing reliance on AI in writing does raise important questions about authenticity and connection. There’s something uniquely human in knowing that the words you're reading come from someone’s personal thoughts, experiences, and emotions—even if flawed. AI-generated content, while efficient and often well-written, lacks that deeper layer of humanity, the imperfections, and the creative struggle that gives writing its soul.

                                                                                                                              It’s easy to feel disillusioned when you know AI is shaping so much of the content around us. Writing used to be a deeply personal exchange, but now, it can feel mechanical, like it’s losing its essence. The pressure to keep up with AI can be overwhelming for human writers, leading to this shift in content creation.

                                                                                                                              At the same time, it’s worth considering that the human element still exists and will always matter—whether in long-form journalism, creative fiction, or even personal blogs. There are people out there who write for the love of it, for the connection it fosters, and for the need to express something uniquely theirs. While the presence of AI is unavoidable, the appreciation for genuine human insight and emotion will never go away.

                                                                                                                              Maybe the answer lies in seeking out and cherishing those authentic voices. While AI-generated writing will continue to grow, the hunger for human storytelling and connection will persist too. It’s about finding balance in this new reality and, when necessary, looking back to the richness of past writings, as you mentioned. While it may seem like a loss in some ways, it could also be a call to be more intentional in what we read and who we trust to deliver those words.

                                                                                                                              • grecy 2 hours ago

                                                                                                                                Eh, like everything in life you can choose what you spend your time on and what you ignore.

                                                                                                                                There have always been human writers I don’t waste my time on, and now there are AI writers in the same category.

                                                                                                                                I don’t care. I will just do what I want with my life and use my time and energy on things I enjoy and find useful.

                                                                                                                                • ozim an hour ago

                                                                                                                                  What kind of silliness is this?

                                                                                                                                  AI-generated crap is one thing. But human-generated crap is out there too - just because a human wrote something doesn't make it good.

                                                                                                                                  Had a friend who thought that if it is written in a book it is for sure true. Well NO!

                                                                                                                                  There was exactly the same sentiment with stuff on the internet and it is still the same sentiment about Wikipedia that “it is just some kids writing bs, get a paper book or real encyclopedia to look stuff up”.

                                                                                                                                  Not defending gen AI - but you still have to make useful proxy measures of what to read and what not. It was always an effort, and nothing is going to substitute critical thinking and putting in effort to separate wheat from the chaff.

                                                                                                                                  • shprd an hour ago

                                                                                                                                    No one claimed humans are perfect. But gen AI is a force multiplier for every problem we had to deal with. It operates at a completely different scale. Your brain is about to be DDOSed by junk content.

                                                                                                                                    Of course, gen AI is just a tool that can be used for good or bad, but spam, targeted misinformation campaigns, and garbage content in general are the areas that will be most amplified, because production became so low effort and these operations don't care about doing any review, double-checking, etc. They can completely automate their process toward whatever goal they have in mind. So where sensible humans enjoy 10x productivity, these spam farms will enjoy 10000x scale.

                                                                                                                                    So I don't think downplaying it and acting like nothing changed is the brightest idea. I hope you see now how that's a completely different game, one that's already here but that we aren't prepared for yet, certainly not with the traditional tools we have.

                                                                                                                                    • flir 19 minutes ago

                                                                                                                                      > Your brain is about to be DDOSed by junk content.

                                                                                                                                      It's not the best analogy because there's already more junk out there than can fit through the limited bandwidth available to my brain, and yet I'm still (vaguely) functional.

                                                                                                                                      So how do I avoid the junk now? Rough and ready trust metrics, I guess. Which of those will still work when the spam's 10x more human?

                                                                                                                                      I think the recommendations of friends will still work, and we'll increasingly retreat to walled gardens where obvious spammers (of both the digital and human variety) can be booted out. I'm still on facebook, but I'm only interested in a few well-moderated groups. The main timeline is dead to me. Those moderators are my content curators for facebook content.

                                                                                                                                      • ozim 6 minutes ago

                                                                                                                                        That is something I agree with.

                                                                                                                                        One cannot be DDOSed with junk when not actively trying to stuff as much junk into one's head.

                                                                                                                                    • dns_snek an hour ago

                                                                                                                                      > nothing is going to substitute critical thinking and putting in effort to separate wheat from the chaff.

                                                                                                                                      The problem is that wheat:chaff ratio used to be 1:100, and soon it's going to become 1:100 million. I think you're severely underestimating the amount of effort it's going to take to find real information in the sea of AI generated content.

                                                                                                                                      • tempfile an hour ago

                                                                                                                                        > you have to make useful proxy measures what to read and what not

                                                                                                                                        yes, obviously. But AI slop makes those proxy measures significantly more complicated. Critical thinking is not magic - it is still a guess, and people are obviously worse at distinguishing AI bullshit from human bullshit.

                                                                                                                                      • williamcotton an hour ago

                                                                                                                                        Well, we’re going to need some system of PKI that is tied to real identities. You can keep being anonymous if you want, but I would prefer not to, and would prefer not to interact with the anonymous, just like how I don’t want to interact with people wearing ski masks.

                                                                                                                                        • flir 25 minutes ago

                                                                                                                                          I doubt that's possible. I can always lend my identity to an AI.

                                                                                                                                          The best you can hope for is not "a human wrote this text", it's "a human vouched for this text".

                                                                                                                                          • nottorp an hour ago

                                                                                                                                            Why are you posting on this forum where the user's identity isn't verified by anyone then? :)

                                                                                                                                            But the real problem is that having the poster's identity verified is no proof that their output is not coming straight from a LLM.

                                                                                                                                            • williamcotton an hour ago

                                                                                                                                              I don’t really have a choice about interacting with the anonymous at this point.

                                                                                                                                              It certainly will affect the reputation of people that are consistently publishing untruths.

                                                                                                                                        • sovietmudkipz 2 hours ago

                                                                                                                                          I am tired and hungry…

                                                                                                                                          The thing I’m tired of is elites stealing everything under the sun to feed these models. So funny that copyright is important when it protects elites but not when a billion thefts are committed by LLM folks. Poor incentives for creators to create stuff if it just gets stolen and replicated by AI.

                                                                                                                                          I’m hungry for more lawsuits. The biggest theft in human history by these gang of thieves should be held to account. I want a waterfall of lawsuits to take back what’s been stolen. It’s in the public’s interest to see this happen.

                                                                                                                                          • Palmik 2 hours ago

                                                                                                                                            The only entities that will win with these lawsuits are the likes of Disney, large legacy news media companies, Reddit, Stack Overflow (who are selling content generated by their users), etc.

                                                                                                                                            Who will also win: Google, OpenAI and other corporations that enter exclusive deals, that can more and more rely on synthetic data, that can build anti-recitation systems, etc.

                                                                                                                                            And of course the lawyers. The lawyers always win.

                                                                                                                                            Who will not win:

                                                                                                                                            Millions of independent bloggers (whose content will be used)

                                                                                                                                            Millions of open source software engineers (whose content will be used against the licenses, and used to displace their livelihood), etc.

                                                                                                                                            The likes of Google and OpenAI entered the space by building on top of the work of the above two groups. Now they want to pull up the ladder. We shouldn't allow that to happen.

                                                                                                                                            • Kiro an hour ago

                                                                                                                                              I would never have imagined hackers becoming copyright zealots advocating for lawsuits. I must be getting old but I still remember the Pirate Bay trial as if it was yesterday.

                                                                                                                                              • progbits 41 minutes ago

                                                                                                                                                I just want consistent and fair rules.

                                                                                                                                                I'm all for abolishing copyright, for everyone. Let the knowledge be free and widely shared.

                                                                                                                                                But until that is the case, and people running super useful services like libgen have to keep hiding, I also want all the LLM corpos to be subject to the same legal penalties.

                                                                                                                                                • someNameIG an hour ago

                                                                                                                                                  Pirate Bay wasn't selling access to the torrents trying to make a massive profit.

                                                                                                                                                  • zarzavat 18 minutes ago

                                                                                                                                                    True, though paid language models are probably just a blip in history. Free weight language models are only ~12 months behind and have massive resources thanks to Meta.

                                                                                                                                                    That profit will be squeezed to zero over the long term if Zuck maintains his current strategy.

                                                                                                                                                  • rsynnott 8 minutes ago

                                                                                                                                                    I'm not sure if you're being disingenuous, or if you genuinely don't understand the difference.

                                                                                                                                                    Pirate Bay: largely facilitating the theft of material from large corporations by normal people, for generally personal use.

                                                                                                                                                    LLM training: theft of material from literally _everyone_, for the purposes of corporate profit (or, well, heh, intended profit; of course all LLM-based enterprises are currently massively loss-making, and may remain so forever).

                                                                                                                                                    • williamcotton an hour ago

                                                                                                                                                      It’s because it now affects hackers and before it only affected musicians.

                                                                                                                                                      • bko an hour ago

                                                                                                                                                        It affects hackers how? By giving them cool technology at below cost? Or is it further democratizing knowledge? Or maybe it's the inflated software eng salaries due to AI hype?

                                                                                                                                                        Help me understand the negative effect of AI and LLMs on hackers.

                                                                                                                                                        • t-3 17 minutes ago

                                                                                                                                                          It's trendy caste-signaling to hate on AI, which endangers white-collar jobs and creative work the way machinery endangered blue-collar jobs and productive work (i.e. not at all in the long run, but in the short term you will face some changes).

                                                                                                                                                          I've never actually used an LLM though - I just don't have any use for such a thing. All my writing and programming are done for fun and automation would take that away.

                                                                                                                                                      • pydry 31 minutes ago

                                                                                                                                                        The common denominator is big corporations trying to screw us over for profit, using their immense wealth as a battering ram.

                                                                                                                                                        So, capitalism.

                                                                                                                                                        It's taboo to criticize that though.

                                                                                                                                                        • munksbeer 26 minutes ago

                                                                                                                                                          > It's taboo to criticize that though.

                                                                                                                                                          It's not, that's playing the victim. There are hundreds or thousands of posts daily all over HN criticising capitalism. And most seem upvoted, not downvoted.

                                                                                                                                                          Don't even get me started on reddit.

                                                                                                                                                          • fernandotakai 17 minutes ago

                                                                                                                                                            i find it quite ironic whenever i see a highly upvoted comment here complaining about capitalism, because for sure i don't see yc existing in any other type of economy.

                                                                                                                                                      • repelsteeltje 28 minutes ago

                                                                                                                                                        I like the stone soup narrative on AI. It was mentioned in a recent Complexity podcast, I think by Alison Gopnik of SFI. It's analogous to the Pragmatic Programmer story about stone soup, paraphrasing:

                                                                                                                                                        Basically you start with a stone in a pot of water — a neural net technology that does nothing meaningful but looks interesting. You say: "the soup is almost done, but would taste better given a bunch of training data." So you add a bunch of well curated docs. "Yeah, that helps, but how about adding a bunch more." So you insert some blogs, copyrighted materials, scraped pictures, reddit, and stack exchange. And then you ask users to interact with the models to fine-tune it, contributing priming to make the output look as convincing as possible.

                                                                                                                                                        Then everyone marvels at your awesome LLM — a simple algorithm. How wonderful this soup tastes, given that the only ingredients are stones and water.

                                                                                                                                                        • CaptainFever 11 minutes ago

                                                                                                                                                          The stone soup story was about sharing, though. Everyone contributes to the pot, and we get something nice. The original stone was there to convince the villagers to share their food with the travellers. This goes against the emotional implication of your adaptation. The story would actually imply that copyright holders are selfish and should be contributing what they can to the AI soup, so we can get something more than the sum of our parts.

                                                                                                                                                          From Wikipedia:

                                                                                                                                                          > Some travelers come to a village, carrying nothing more than an empty cooking pot. Upon their arrival, the villagers are unwilling to share any of their food stores with the very hungry travelers. Then the travelers go to a stream and fill the pot with water, drop a large stone in it, and place it over a fire. One of the villagers becomes curious and asks what they are doing. The travelers answer that they are making "stone soup", which tastes wonderful and which they would be delighted to share with the villager, although it still needs a little bit of garnish, which they are missing, to improve the flavor.

                                                                                                                                                          > The villager, who anticipates enjoying a share of the soup, does not mind parting with a few carrots, so these are added to the soup. Another villager walks by, inquiring about the pot, and the travelers again mention their stone soup which has not yet reached its full potential. More and more villagers walk by, each adding another ingredient, like potatoes, onions, cabbages, peas, celery, tomatoes, sweetcorn, meat (like chicken, pork and beef), milk, butter, salt and pepper. Finally, the stone (being inedible) is removed from the pot, and a delicious and nourishing pot of soup is enjoyed by travelers and villagers alike. Although the travelers have thus tricked the villagers into sharing their food with them, they have successfully transformed it into a tasty meal which they share with the donors.

                                                                                                                                                          (Open source models exist.)

                                                                                                                                                        • DoctorOetker an hour ago

                                                                                                                                                          Here is a business model for copyright law firms:

                                                                                                                                                          Use source-aware training: train on the same datasets as used in LLM training, plus copyrighted content. Now the LLM can respond not just with what it thinks is most likely, but also with which source document(s) provided specific content. Then you can consult commercially available LLMs, detect copyright infringements, and identify copyright holders. Extract perpetrators and victims at scale. To ensure indefinite exploitation, only sue commercially successful LLM providers, so there is a constant new flux of growing small LLM providers taking up the niche freed by large LLM providers being sued empty.

                                                                                                                                                          • chrismorgan 17 minutes ago

                                                                                                                                                            > Use source-aware training

                                                                                                                                                            My understanding (as one uninvolved in the industry) is that this is more or less a completely unsolved problem.

                                                                                                                                                          • Lichtso an hour ago

                                                                                                                                                            Lawsuits based on what? Copyright?

                                                                                                                                                            People crying for copyright in the context of AI training don't understand what copyright is, how it works and when it applies.

                                                                                                                                                            How they think copyright works: when you take someone's work as inspiration, everything you produce from that counts as derivative work.

                                                                                                                                                            How copyright actually works: the input is irrelevant, only the output matters. Thus derivative work is whatever explicitly contains or resembles the underlying work, no matter whether it was actually based on that work or is mere happenstance / coincidence.

                                                                                                                                                            Thus AI models are safe from copyright lawsuits as long as they filter out any output which comes too close to known material. Everything else is fine, even if the model was explicitly trained on commercial copyrighted material only.

                                                                                                                                                            In other words: The concept of intellectual property is completely broken and that is old news.
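                                                                                                                                                            As a rough illustration of the kind of output filter described above (purely a hypothetical sketch; real anti-recitation systems are far more sophisticated, and all names here are invented for illustration):

```python
# Hypothetical sketch of an "anti-recitation" output filter: flag generated
# text that repeats a long verbatim word n-gram from any known source document.

def ngrams(words, n):
    """All contiguous n-word tuples in a list of words."""
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_recited(output, corpus, n=8):
    """True if `output` shares any n-word run with a corpus document."""
    out_grams = ngrams(output.split(), n)
    return any(out_grams & ngrams(doc.split(), n) for doc in corpus)

corpus = ["the quick brown fox jumps over the lazy dog and runs far away"]
# Shares an 8-word run with the corpus -> flagged:
print(looks_recited("he said the quick brown fox jumps over the lazy dog today", corpus))
# No long verbatim overlap -> passes:
print(looks_recited("a completely original sentence with no overlap at all", corpus))
```

                                                                                                                                                            In practice the threshold `n` is a trade-off: too small and common phrases trigger false positives, too large and lightly paraphrased recitation slips through.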

                                                                                                                                                            • rcxdude an hour ago

                                                                                                                                                              Also, the desired interpretation of copyright will not stop the multi-billion-dollar AI companies, who have the resources to buy the rights to content at a scale no-one else does. In fact it will give them a gigantic moat, allowing them to extract even more value out of the rest of the economy, to the detriment of basically everyone else.

                                                                                                                                                              • LunaSea an hour ago

                                                                                                                                                                Lawsuits based on code licensing for example.

                                                                                                                                                                Scraping websites containing source code which is distributed with specific licenses that OpenAI & co don't follow.

                                                                                                                                                                • Lichtso 44 minutes ago

                                                                                                                                                                  Unfortunately that is not how it works, or at least not to the extent you wish it to be.

                                                                                                                                                                  One can train a model exclusively on source code from the linux kernel (GPL) and then generate a bunch of C programs or libraries from that. And they could publish them under MIT license as long as they don't reproduce any identifiable sections from the linux kernel. It does not matter where the model learned how to program.

                                                                                                                                                                  • jeremyjh 13 minutes ago

                                                                                                                                                                    That is not relevant to the comment you are responding to. Courts have been finding that scraping a website in violation of its terms of service is a liability, regardless of what you do with the content. We are not only talking about copyright.

                                                                                                                                                                    • CaptainFever 5 minutes ago

                                                                                                                                                                      True, but ToSes don't apply if you don't explicitly agree with it (e.g. by signing up for an account). So that's not relevant in the case of publicly available content.

                                                                                                                                                                    • LunaSea 27 minutes ago

                                                                                                                                                                      You're mistaken.

                                                                                                                                                                      If I write code with a license that says that using this code for AI training is forbidden then OpenAI is directly going against this by scraping websites indiscriminately.

                                                                                                                                                                      • Lichtso 17 minutes ago

                                                                                                                                                                        Sure, you can write all kinds of stuff in a license, but it is simply plain prose at that point. Not enforceable.

                                                                                                                                                                        There is a reason why it is generally advised to go with the established licenses and not invent your own, similarly to how you should not roll your own cryptography: Because it most likely won't work as intended.

                                                                                                                                                                        e.g. License: This comment is licensed under my custom L*a license. Any user with an username starting with "L" and ending in "a" is forbidden from reading my comment and producing replies based on what I have written.

                                                                                                                                                                        ... see?

                                                                                                                                                                        • LunaSea 12 minutes ago

                                                                                                                                                                          You can absolutely write a license that contains the clauses I mentioned and it would be enforceable.

                                                                                                                                                                          Sorry, but the onus is on OpenAI to read the licenses not the creator.

                                                                                                                                                                          And throwing your hands in the air and saying "oh you can't do that in a license" is also of little use.

                                                                                                                                                                          • CaptainFever 4 minutes ago

                                                                                                                                                                            No, it would not be enforceable. Your license can only give additional rights to users. It cannot restrict rights that users already have (e.g. fair use rights in the US, or AI training rights like in the EU or SG).

                                                                                                                                                                • artninja1988 2 hours ago

                                                                                                                                                                  Copying data is not theft

                                                                                                                                                                  • rpgbr 2 hours ago

                                                                                                                                                                    It’s only theft when people copy data from companies. The other way around is ok, I guess.

                                                                                                                                                                    • CaptainFever 3 minutes ago

                                                                                                                                                                      Copying is not theft either way.

                                                                                                                                                                    • a5c11 2 hours ago

                                                                                                                                                                      Is piracy legal then? It's just a copy of someone else's copy.

                                                                                                                                                                      • vasco an hour ago

                                                                                                                                                                        You calling it piracy is already a moral stance. Copying data isn't morally wrong in my opinion, it is not piracy and it is not theft. It happens to not be legal but just a few short years ago it was legal to marry infants to old men and you could be killed for illegal artifacts of witchcraft. Legality and morality are not the same, and the latter depends on personal opinion.

                                                                                                                                                                        • chownie an hour ago

                                                                                                                                                                          Was the legality the question? If so it seems we care about data "theft" in a very one sided manner.

                                                                                                                                                                          • criddell an hour ago

                                                                                                                                                                            The person who insists copying isn’t theft would probably point out that piracy is something done on the high seas.

                                                                                                                                                                            From the context of the comment it was pretty clear that they were using theft as shorthand for taking without permission.

                                                                                                                                                                            • IanCal an hour ago

                                                                                                                                                                              The usual argument is less about piracy as a term and more the use of the word theft, and your use of the word "taking". When we talk about physical things theft and taking mean depriving the owner of that thing.

                                                                                                                                                                              If I have something, and you copy it, then I still have that thing.

                                                                                                                                                                            • tempfile an hour ago

                                                                                                                                                                              It's not legal, but it's also not theft.

                                                                                                                                                                            • threeseed 2 hours ago

                                                                                                                                                                              Technically that is true. But you will still be charged with a litany of other crimes.

                                                                                                                                                                              • atoav an hour ago

                                                                                                                                                                                Yet unlicensed use can be its own crime under current law.

                                                                                                                                                                                • flohofwoe an hour ago

                                                                                                                                                                                  So now suddenly when the bigwigs do it, software piracy and "IP theft" is totally fine? Thanks, good to know ;)

                                                                                                                                                                                • makin 2 hours ago

                                                                                                                                                                                  I'm sorry if this is strawmanning you, but I feel you're basically saying it's in the public's interest to give more power to Intellectual Property law, which historically hasn't worked out so well for the public.

                                                                                                                                                                                  • vouaobrasil 2 minutes ago

                                                                                                                                                                                    The difference is that before, intellectual property law was used by corporations to enrich themselves. Now intellectual property law could theoretically be used to combat an even bigger enemy: big tech stealing all possible jobs. It's just a matter of practicality, like all law is.

                                                                                                                                                                                    • jbstack 2 hours ago

                                                                                                                                                                                      The law already exists. Applying the law in court doesn't "give more power" to it. To do that you'd have to change the law.

                                                                                                                                                                                      • joncrocks 7 minutes ago

                                                                                                                                                                                        Which law are you referencing?

                                                                                                                                                                                        Copyright as far as I understand is focused on wholesale reproduction/distribution of works, rather than using material for generation of new works.

                                                                                                                                                                        If something is available without contractual restriction, it is available to all. Whether it's me reading a book or an LLM reading a book, both could be considered the same.

                                                                                                                                                                        Where the law might have something to say is around the output of said trained models; this might be interesting to see given the potential of small-scale outputs. i.e. If I output something to a small number of people, how does one detect/report that level of infringement? Does the `potential` of infringement start to matter?

                                                                                                                                                                                      • atoav 2 hours ago

                                                                                                                                                                                        Nah. What he is saying is that the existing law should be applied equally. As of now intellectual property as a right only works for you if you are a big corporation.

                                                                                                                                                                                        • probably_wrong an hour ago

                                                                                                                                                                          I think the second alternative works too: either you sue these companies into the ground for copyright infringement at a scale never seen, OR you decriminalize copyright infringement.

                                                                                                                                                                                          The problem (as far as this specific discussion goes) is not that IP laws exist, but rather that they are only being applied in one direction.

                                                                                                                                                                                          • fallingknife an hour ago

                                                                                                                                                                                            HN generally hated (and rightly so, IMO) strict copyright IP protection laws. Then LLMs came along and broke everybody's brain and turned this place into hardline copyright extremists.

                                                                                                                                                                                          • forinti an hour ago

                                                                                                                                                                                            Capitalism started by putting up fences around land to kick people out and keep sheep in. It has been putting fences around everything it wants and IP is one such fence. It has always been about protecting the powerful.

                                                                                                                                                                                            IP has had ample support because the "protect the little artist" argument is compelling, but it is just not how the world works.

                                                                                                                                                                                            • johnchristopher an hour ago

                                                                                                                                                                                              > Capitalism started by putting up fences around land to kick people out and keep sheep in.

                                                                                                                                                                                              That's factually wrong. Capitalism is about moving wealth more efficiently: easier to allocate money/wealth to X through the banking system than to move sheep/wealth to X's farm.

                                                                                                                                                                                              • tempfile an hour ago

                                                                                                                                                                                                capitalism and "money as an abstract concept" are unrelated.

                                                                                                                                                                                            • fallingknife an hour ago

                                                                                                                                                                                              Copyright law is intended to prevent people from stealing the revenue stream from someone else's work by copying and distributing that work in cases where the original is difficult and expensive to create, but easy to make copies of once created. How does an LLM do this? What copies of copyrighted work do they distribute? Whose revenue stream are they taking with this action?

                                                                                                                                                                              I believe that all the copyright suits against AI companies will be total failures because I can't come up with an answer to any of those questions.

                                                                                                                                                                                              • jokethrowaway an hour ago

                                                                                                                                                                                                It's the other way round. The little guys will never win, it will be just a money transfer from one large corp to another.

                                                                                                                                                                                                We should just scrap copyright and everybody plays a fair game, including us hackers.

                                                                                                                                                                                                Sue me because of breach of contract in civil court for damages because I shared your content, don't send the police and get me jailed directly.

                                                                                                                                                                                                I had my software cracked and stolen and I would never go after the users. They don't have any contract with me. They downloaded some bytes from the internet and used it. Finding whoever shared the code without authorization is hard and even so, suing them would cost me more than the money I'm likely to get back. Fair game, you won.

                                                                                                                                                                                                It's a natural market "tax" on selling a lot of copies and earning passively.

                                                                                                                                                                                              • DiscourseFan 2 hours ago

                                                                                                                                                                                                The underlying technology is good.

                                                                                                                                                                                                But what the fuck. LLMs, these weird, surrealistic art-generating programs like DALL-E, they're remarkable. Don't tell me they're not, we created machines that are able to tap directly into the collective unconscious. That is a serious advance in our productive capabilities.

                                                                                                                                                                                                Or at least, it could be.

                                                                                                                                                                                                It could be if it was unleashed, if these crummy corporations didn't force it to be as polite and boring as possible, if we actually let the machines run loose and produce material that scared us, that truly pulled us into a reality far beyond our wildest dreams--or nightmares. No, no we get a world full of pussy VCs, pussy nerdy fucking dweebs who got bullied in school and seek revenge by profiteering off of ennui, and the pussies who sit around and let them get away with it. You! All of you! sitting there, whining! Go on, keep whining, keep commenting, I'm sure that is going to change things!

                                                                                                                                                                                                There's one solution to this problem and you know it as well as I do. Stop complaining and go "pull yourself up by your bootstraps." We must all come together to help ourselves.

                                                                                                                                                                                                • dannersy an hour ago

                                                                                                                                                                                                  The fact I even see responses like this shows me that HN is not the place it used to be, or at the very least, it is on a down trend. I've been alarmed by many sentiments that seemed popular on HN in the past, but seeing more and more people welcome a race to the bottom such as this is sad.

                                                                                                                                                                                                  When I read this, I cannot tell if it's performance art or not, that's how bad this genuinely is.

                                                                                                                                                                                                  • diggan an hour ago

                                                                                                                                                                                                    > The fact I even see responses like this shows me that HN is not the place it used to be, or at the very least, it is on a down trend.

                                                                                                                                                                                                    Judging a large group of people based on what a few write seems very un-scientific at best.

                                                                                                                                                                    Especially when it comes to things that have been rehashed since I've joined HN (and probably earlier too). Feels like there will always be someone lamenting how HN isn't how it used to be, or how reddit influx ruined HN, or how HN isn't about startups/technical stuff/$whatever anymore.

                                                                                                                                                                                                    • dannersy 42 minutes ago

                                                                                                                                                                      A bunch of profanity-laced name-calling, derision, and even some blame aimed directly at the user base. It feels like a Reddit shitpost. Your claim is as generalized and un-scientific as mine, but if it makes you feel better, I'll say it _feels_ like this wouldn't fly even a couple years ago.

                                                                                                                                                                                                      • diggan 36 minutes ago

                                                                                                                                                                        It's just been said for so long that either HN has always been on the decline, or people have always thought it was in decline...

                                                                                                                                                                                                        This comes to mind:

                                                                                                                                                                                                        > I don't think it's changed much. I think perceptions of the kind you're describing (HN is turning into reddit, comments are getting worse, etc.) are more a statement about the perceiver than about HN itself, which to me seems same-as-it-ever-was. I don't know, however.

                                                                                                                                                                                                        https://news.ycombinator.com/item?id=40735225

                                                                                                                                                                        You can also browse some results for how long dang has been responding to similar complaints to see for how long those complaints have been ongoing:

                                                                                                                                                                                                        https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

                                                                                                                                                                                                    • DiscourseFan 4 minutes ago

                                                                                                                                                                                                      It's definitely performance, you're right.

                                                                                                                                                                                                      Though it landed its effect.

                                                                                                                                                                                                      • lijok 22 minutes ago

                                                                                                                                                                                                        It has been incredible to observe how subdued the populace has become with the proliferation of the internet.

                                                                                                                                                                                                        • pilooch 11 minutes ago

                                                                                                                                                                          It's intended as a joke and a demonstration, no? Like this is exactly the type of text and words that a commercial grade LLM would never let you generate :) At least that's how I got that comment...

                                                                                                                                                                                                          • primitivesuave 43 minutes ago

                                                                                                                                                                                                            The alarming trend should be how even a slightly contrarian point of view is downvoted to oblivion, and that newer members of the community expect it to work that way.

                                                                                                                                                                                                            • dannersy 41 minutes ago

                                                                                                                                                                                                              I don't think it's the contrarian part that I have a problem with.

                                                                                                                                                                                                              • primitivesuave 33 minutes ago

                                                                                                                                                                                                                HN is a place for intellectual curiosity. For over a decade I have seen great minds respectfully debate their point of view on this forum. In this particular case, I would have been genuinely interested to learn why exactly the original comment is advocating for a "race to the bottom" - in fact, there is a sibling comment to yours which makes a cogent argument without personally attacking the original commenter.

                                                                                                                                                                                                                Instead, you devoted 2/3 of your comment toward berating the OP as being responsible for your perception of HN's decline.

                                                                                                                                                                                                            • Kuinox an hour ago

                                                                                                                                                                                                              "I don't like the opinion of certain persons I read on HN, therefore HN is on a down trend"

                                                                                                                                                                                                              • dannersy 40 minutes ago

                                                                                                                                                                                                                Like I've said to someone else, the contrarian part isn't the issue. While I disagree with the race to the bottom, it reads like a Reddit shitpost, which was frowned upon once upon a time. But strawman me if you must.

                                                                                                                                                                                                                • layer8 a minute ago

                                                                                                                                                                                  I think you need to recalibrate; it does not read like a Reddit shitpost at all.

                                                                                                                                                                                                            • threeseed an hour ago

                                                                                                                                                                                                              a) There are plenty of models out there without guard rails.

                                                                                                                                                              b) Society is already plenty de-sensitised to violence, sex and whatever other horrors anyone has conceived of in the last century of content production. There is nothing an LLM can come up with that has shocked or is going to shock anyone.

                                                                                                                                                              c) The most popular use cases for these unleashed models seem to be, as expected, deepfakes of high school girls by their peers. Nothing that is moving society forward.

                                                                                                                                                                                                              • nottorp 43 minutes ago

                                                                                                                                                                > c) The most popular use cases for these unleashed models seem to be, as expected, deepfakes of high school girls by their peers. Nothing that is moving society forward.

                                                                                                                                                                                                                Is there proof that the self censoring only affects whatever the censors intend to censor? These are neural networks, not something explainable and predictable.

                                                                                                                                                                                                                That in addition to the obvious problem of who decides what to censor.

                                                                                                                                                                                                                • mindcandy 43 minutes ago

                                                                                                                                                                                                                  Tens of millions of people are having fun making art in new ways with AI.

                                                                                                                                                                                                                  Hundreds of thousands of people are making AI porn in their basements and deleting 99.99% of it when they are… finished.

                                                                                                                                                                                                                  Hundreds of people are making deep fakes of people they know in some public forums.

                                                                                                                                                                                                                  And, how does the public interpret all of this?

                                                                                                                                                                                                                  “The most popular use case is deepfake porn.”

                                                                                                                                                                                                                  Sigh…

                                                                                                                                                                                                                  • DiscourseFan an hour ago

                                                                                                                                                                                                                    >Nothing that is moving society forward.

                                                                                                                                                                                                                    OpenAI "moves society forward," Microsoft "moves society forward." I'm sincerely uninterested in progress, it always seems like progress just so happens to be very enriching for those who claim it.

                                                                                                                                                                                                                    >There are plenty of models out there without guard rails.

                                                                                                                                                                                                                    Not being used at a mass scale.

                                                                                                                                                                                                                    >Society is already plenty de-sensitised to violence, sex and whatever other horrors anyone has conceived of in the last century of content production. There is nothing an LLM can come up with that has or is going to shock anyone.

                                                                                                                                                                                                                    Oh, but it wouldn't really be very shocking if you could expect it, now would it?

                                                                                                                                                                                                                    • threeseed an hour ago

                                                                                                                                                                                                                      I am not arguing about the merits of LLMs.

Just that we have had unleashed models for a while now, and the only popular use case for them has been deep fakes. Otherwise it's just boring, generic content no different from what you find on X or 4chan. It's 2024, not 1924 - the world has already seen every horror imaginable many times over.

                                                                                                                                                                                                                      And not sure why you think if they were mass scale it would change anything. Most of the world prefers moderated products and services.

                                                                                                                                                                                                                      • DiscourseFan 43 minutes ago

                                                                                                                                                                                                                        >Most of the world prefers moderated products and services.

Yes, the very same "moderated" products and services that have raised sea surface temperatures so high that at least 3 category 4+ hurricanes, 5 major wildfires, and at least one potential or actual pandemic spread unabated every year. Oh, but don't worry, they won't let everyone die: then there would be no one to buy their "products and services."

                                                                                                                                                                                                                        • primitivesuave 7 minutes ago

I'm not sure the analogy works if you're trying to compare fossil fuels to LLMs. A few decades ago, virtually all gasoline was full of lead, and the CFCs from refrigerators created a hole in the ozone layer. In those cases it turned out that you actually do need a few guardrails as technology advances, to prevent an existential threat.

                                                                                                                                                                                                                          Although I do agree with you that in this particular situation, the LLM safety features have often felt unnecessary, especially because my primary use case for ChatGPT is asking critical questions about history. When it comes to history, every LLM seems to have an increasingly robust guardrail against making any sort of definitive statement, even after it presents a wealth of supporting evidence.

                                                                                                                                                                                                                    • anal_reactor an hour ago

                                                                                                                                                                                                                      a) Not easy to use by average person

                                                                                                                                                                                                                      b) No, certain things aren't taboo anymore, but new taboos emerged. Watch a few older movies and count "wow this wouldn't fly nowadays" moments

                                                                                                                                                                                                                      c) This was exactly the same use case the internet had back when it was fun, and full of creativity.

                                                                                                                                                                                                                    • archerx an hour ago

They can be unleashed if you run the models locally. With Stable Diffusion / Flux and the various checkpoints/LoRAs you can generate horrors beyond your imagination, or whatever you want, without restrictions.

The same goes for LLMs and Llamafile. With the unleashed ones you can generate dirty jokes that would make edgy people blush, or just politically incorrect things for fun.

                                                                                                                                                                                                                      • rsynnott 5 minutes ago

I mean, Stable Diffusion is right there, ready to be used to produce comically awful porn and so forth.

                                                                                                                                                                                                                      • pilooch 4 minutes ago

By AI here, what is meant is generative systems relying on neural networks and semi-/self-supervised training algorithms.

That is a reduction of what AI is as a computer science field, and even of what the subfield of generative AI is.

On a positive note, generative AI is a malleable, statistically grounded technology with a large applicative scope. At the moment the generalist commercial and open models are "consumed" by users, developers, etc. But there's a trove of forthcoming, personalized use cases and ideas to come.

It's just that we are still more in a contemplating phase than a true building phase. As a machine learning practitioner myself, I recently replaced my spam filter with a custom fine-tuned multimodal LLM that reads my emails as pure images. And this is the early, early beginning; imagination and local personalization will emerge.

So I'd say, being tired of it now means missing out on much that comes later. Keep the good spirit on, think outside the box, and relax too :)

                                                                                                                                                                                                                        • Devasta 4 minutes ago

                                                                                                                                                                                                                          In Star Trek, one thing that I always found weird as a kid is they didn't have TVs. Even if the holodeck is a much better experience, I imagine sometimes you would want to watch a movie and not be in the movie. Did the future not have works like No Country for Old Men or comedies like Monty Python, or even just stuff like live sports and the news?

Nowadays we know why the crew of the Enterprise all go to live performances of Shakespeare and practice musical instruments and painting themselves: electronic media are so full of AI slop there is nothing worth seeing, only endless deluges of sludge.

                                                                                                                                                                                                                          • senko 11 minutes ago

What's funny to me is how many people protest AI as a means to generate incorrect, misleading, or fake information, as if they haven't used the internet in the past 10-15 years.

The internet is chock-full of incorrect, fake, or misleading information, and has been ever since people figured out they could churn out low-quality content in between Google ads.

                                                                                                                                                                                                                            There's a whole industry of "content writers" who write seemingly meaningful stuff that doesn't bear close scrutiny.

                                                                                                                                                                                                                            Nobody has trusted product review sites for years, with people coping by adding "site:reddit" as if a random redditor can't engage in some astroturfing.

These days, it's really hard to figure out who (in the media / on the net) to trust. AI has just thrust that long-overdue realization into the spotlight.

                                                                                                                                                                                                                            • Toorkit 2 hours ago

                                                                                                                                                                                                                              Computers were supposed to be these amazing machines that are super precise. You tell it to do a thing, it does it.

                                                                                                                                                                                                                              Nowadays, it seems we're happy with computers apparently going RNG mode on everything.

                                                                                                                                                                                                                              2+2 can now be 5, depending on the AI model in question, the day, and the temperature...

                                                                                                                                                                                                                              • maguay 2 hours ago

                                                                                                                                                                                                                                This, 100%, is the reason I feel like the sand's shifting under my feet.

                                                                                                                                                                                                                                We went from trusting computing output to having to second-guess everything. And it's tiring.

                                                                                                                                                                                                                                • diggan 41 minutes ago

                                                                                                                                                                                                                                  I kind of feel like if you're using a "Random text generator based on probability" for something that you need to trust, you're kind of holding this tool wrong.

I wouldn't complain that an RNG doesn't return the numbers I want, so why complain that you don't get 100% trusted output from a random text generator?

                                                                                                                                                                                                                                  • jeremyjh 3 minutes ago

Because people provide that work without acknowledging it was created by an RNG, representing it as their own and implying some level of assurance that it is actually true.

                                                                                                                                                                                                                                • archerx an hour ago

It's a Large LANGUAGE Model, not a Large MATHEMATICS Model. People need to learn to use the right tools for the right jobs. Also, LLMs can be made more deterministic by controlling their "temperature".
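The temperature point above can be illustrated without any LLM at all. The sketch below is a toy, stdlib-only softmax sampler over made-up next-token scores: dividing the logits by a low temperature sharpens the distribution until sampling collapses onto the argmax, which is why temperature 0 behaves (near-)deterministically.

```python
import math
import random

def sample(logits, temperature, rng):
    """Sample an index from a temperature-scaled softmax over logits.

    As temperature -> 0 the distribution collapses onto the argmax,
    so sampling becomes effectively deterministic."""
    if temperature <= 1e-6:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e / total
        if r <= cum:
            return i
    return len(logits) - 1

rng = random.Random(0)
logits = [2.0, 1.0, 0.1]  # made-up next-token scores
low = {sample(logits, 0.0, rng) for _ in range(100)}   # always the argmax
high = {sample(logits, 2.0, rng) for _ in range(100)}  # spread across tokens
```

Real inference engines apply the same scaling to a vocabulary-sized logit vector before sampling; the three-entry `logits` list here is purely illustrative.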

                                                                                                                                                                                                                                  • Toorkit an hour ago

There are other forms of AI than LLMs, and to be honest I thought the 2+2=5 was obviously an analogy.

Yet two comments have immediately jumped on it.

                                                                                                                                                                                                                                    • anon1094 an hour ago

Yep. ChatGPT will use the code interpreter for questions like "is 2 + 2 = 5?", as it should.

                                                                                                                                                                                                                                    • a5c11 19 minutes ago

                                                                                                                                                                                                                                      That's an interesting point of view. For some reason we put so much effort towards making computers think and behave like a human being, while one of the first reasons behind inventing a computer was to avoid human errors.

                                                                                                                                                                                                                                      • shultays an hour ago

There are areas where it doesn't have to be as "precise", like image generation or editing, which I believe are better suited for AI tools.

                                                                                                                                                                                                                                        • Janicc an hour ago

                                                                                                                                                                                                                                          These amazing machines weren't consistently able to tell if an image had a bird in it or not up until like 8 years ago. If you use AI as a calculator where you need it to be precise, that's on you.

                                                                                                                                                                                                                                          • left-struck an hour ago

I think about it differently. Before, computers had to be given extremely precise and completely unambiguous instructions; now they can handle some ambiguity as well. You still have the precise output if you want it; it didn't go away.

                                                                                                                                                                                                                                            Btw I’m also tired of AI, but this is one thing that’s not so bad

                                                                                                                                                                                                                                            Edit: before someone mentions fuzzy logic, I’m not talking about the input of a function being fuzzy, I’m talking about the instructions themselves, the function is fuzzy.

                                                                                                                                                                                                                                            • GaggiX an hour ago

                                                                                                                                                                                                                                              Machines were not able to deal with non-formal problems.

                                                                                                                                                                                                                                              • bamboozled 2 hours ago

                                                                                                                                                                                                                                                Had to laugh at this one. I think we prefer the statistical approach because it’s easier, for us …

                                                                                                                                                                                                                                              • koliber 2 hours ago

                                                                                                                                                                                                                                                I am approaching AI with caution. Shiny things don't generally excite me.

                                                                                                                                                                                                                                                Just this week I installed cursor, the AI-assisted VSCode-like IDE. I am working on a side project and decided to give it a try.

                                                                                                                                                                                                                                                I am blown away.

                                                                                                                                                                                                                                                I can describe the feature I want built, and it generates changes and additions that get me 90% there, within 15 or so seconds. I take those changes, and carefully review them, as if I was doing a code review of a super-junior programmer. Sometimes when I don't like the approach it took, I ask it to change the code, and it obliges and returns something closer to my vision.

Finally, once it is implemented, I manually test the new functionality. Afterward, I ask it to generate a set of automated test cases. Again, I review them carefully, both from the perspective of correctness and of suitability. It over-tests on things that don't matter, and I throw away part of the code it generates. What stays behind is on-point.

It has sped up my ability to write software and tests tremendously. Since I know what I want, I can describe it well. It generates code quickly, and I can spend my time reviewing and correcting. I don't need to type as much. It turns my abstract ideas into reasonably decent code in record time.

Another example. I wanted to instrument my app with Posthog events. First, I went through the code and added "# TODO add Posthog event" in all the places I wanted to record events. Next, I asked cursor to add the instrumentation code in those places. With some manual copy-and-pasting and lots of small edits, I instrumented a small app in <10 minutes.
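The TODO-marker pass described above can be sketched as follows. The signup flow, event name, and properties are all invented for illustration, and the `capture` helper is a stand-in sink so the snippet runs without the SDK; in a real app that call would be `posthog.capture(distinct_id, event, properties)` from the posthog-python library.

```python
# Stand-in event sink so this sketch runs without the PostHog SDK.
events = []

def capture(distinct_id, event, properties=None):
    # In production this would be posthog.capture(...).
    events.append({"distinct_id": distinct_id, "event": event,
                   "properties": properties or {}})

def sign_up(user_id, plan):
    # ... create the account (hypothetical business logic) ...
    # was: "# TODO add Posthog event"
    capture(user_id, "user_signed_up", {"plan": plan})
    return True

sign_up("u42", "free")
```

The appeal of the marker-first approach is that the human picks every instrumentation point, and the AI only fills in the repetitive call-site boilerplate.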

We are not yet at the point where AI writes code for us and we can blindly accept it. But we are at a point where AI can take care of a lot of the dreary, busy typing work.

                                                                                                                                                                                                                                                • DanHulton 40 minutes ago

                                                                                                                                                                                                                                                  I sincerely worry about a future when most people act in this same manner.

You have - for now - sufficient experience and understanding to be able to review the AI's code and decide if it was doing what you wanted it to. But what about when you've spent months just blindly accepting what the AI tells you? Are you going to be familiar enough with the project anymore to catch its little mistakes? Or worse, what about the new generation of coders who are growing up with these tools, who NEVER had the expertise required to be able to evaluate AI-generated code, because they never had to learn it, never had to truly internalize it?

                                                                                                                                                                                                                                                  It's late, and I think if I try to write any more just now, I'm going to go well off the rails, but I've gone into depth on this topic recently, if you're interested: https://greaterdanorequalto.com/ai-code-generation-as-an-age...

                                                                                                                                                                                                                                                  In the article, I posit a less than glowing experience with coding tools than you've had, it sounds like, but I'm also envisioning a more complex use case, like when you need to get into the meat of some you-specific business logic it hasn't seen, not common code it's been exposed to thousands of times, because that's where it tends to fall apart the most, and in ways that are hard to detect and with serious consequences. If you haven't run into that yet, I'd be interested to know if you do some day. (And also to know if you don't, though, to be honest! Strong opinions, loosely held, and all that.)

                                                                                                                                                                                                                                                  • irisgrunn an hour ago

                                                                                                                                                                                                                                                    And this is the major problem. People will blindly trust the output of AI because it appears to be amazing, this is how mistakes slip in. It might not be a big deal with the app you're working on, but in a banking app or medical equipment this can have a huge impact.

                                                                                                                                                                                                                                                    • Gigachad an hour ago

                                                                                                                                                                                                                                                      I feel like I’m being gaslit about these AI code tools. I’ve got the paid copilot through work and I’ve just about never had it do anything useful ever.

                                                                                                                                                                                                                                                      I’m working on a reasonably large rails app and it can’t seem to answer any questions about anything, or even auto fill the names of methods defined in the app. Instead it just makes up names that seem plausible. It’s literally worse than the built in auto suggestions of vs code, because at least those are confirmed to be real names from the code.

                                                                                                                                                                                                                                                      Maybe these tools work well on a blank project where you are building basic login forms or something. But certainly not on an established code base.

                                                                                                                                                                                                                                                      • kgeist 44 minutes ago

For me, AI is super helpful with one-off scripts, which I happen to write quite often when doing research. Just yesterday, I had to check that my assumptions about a certain aspect of our live system were true, and all I had was a large file that had to be parsed. I asked ChatGPT to write a script which parses the data and presents it in a certain way. I don't trust ChatGPT 100%, so I reviewed the script and checked that it returned correct output on a subset of the data. It's something I'd do to the script anyway if I wrote it myself, but it saved me about 20 minutes of typing and debugging. I was in a hurry because we had an incident that had to be resolved as soon as possible. I haven't tried it on proper codebases (and I think that's just not possible at the moment), but for quick scripts that automate research in an ad hoc manner, it's been super useful for me.
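The kind of one-off script described above is also easy to sanity-check by hand, which is what makes the workflow safe. A minimal sketch of the pattern (the CSV-style input format, the field positions, and the `summarize` helper are all hypothetical, not the actual file from the incident):

```python
import csv
from collections import Counter

def summarize(path):
    """Tally occurrences of the second field (e.g. a status code) in a
    hypothetical CSV log of the form: timestamp,status,message."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.reader(f):
            if len(row) >= 2:          # skip malformed lines
                counts[row[1]] += 1
    return counts
```

The review step from the comment maps to running the script over a small hand-checked slice of the data and comparing the counts before trusting it on the full file.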

                                                                                                                                                                                                                                                        Another case is prototyping. A few weeks ago I made a prototype to show to the stakeholders, and it was generally way faster than if I wrote it myself.

                                                                                                                                                                                                                                                        • thewarrior an hour ago

                                                                                                                                                                                                                                                          It’s writing most of my code now. Even if it’s existing code you can feed in the 1-2 files in question and iterate on them. Works quite well as long as you break it down a bit.

It's not gaslighting; the latest versions of GPT, Claude, and Llama have gotten quite good.

                                                                                                                                                                                                                                                          • Gigachad an hour ago

These tools must be absolutely massively better than whatever Microsoft has, then, because I've found that GitHub Copilot provides negative value. I'd be more productive just turning it off rather than auditing its incorrect answers, hoping one day it's as good as people market it to be.

                                                                                                                                                                                                                                                            • diggan 43 minutes ago

                                                                                                                                                                                                                                                              > These tools must be absolutely massively better than whatever Microsoft has then

I haven't used anything from Microsoft (including Copilot) so I'm not sure how it compares, but compared to any local model I've been able to load, and various other remote third-party ones (like Claude), nothing comes close to GPT-4 from OpenAI, especially for coding. Maybe give that a try if you can.

                                                                                                                                                                                                                                                              It still produces overly verbose code and doesn't really think about structure well (kind of like a junior programmer), but with good prompting you can kind of address that somewhat.

                                                                                                                                                                                                                                                              • piker an hour ago

                                                                                                                                                                                                                                                                Which languages do you use?

                                                                                                                                                                                                                                                            • Kiro an hour ago

                                                                                                                                                                                                                                                              That sounds almost like the complete opposite of my experience and I'm also working in a big Rails app. I wonder how our experiences can be so diametrically different.

                                                                                                                                                                                                                                                              • Gigachad an hour ago

                                                                                                                                                                                                                                                                What kind of things are you using it for? I’ve tried asking it things about the app and it only gives me generic answers that could apply to any app. I’ve tried asking it why certain things changed after a rails update and it gives me generic troubleshooting advice that could apply to anything. I’ve tried getting it to generate tests and it makes up names for things or generally gets it wrong.

                                                                                                                                                                                                                                                            • svara 20 minutes ago

                                                                                                                                                                                                                                                              I don't think this criticism is valid at all.

                                                                                                                                                                                                                                                              What you are saying will occasionally happen, but mistakes already happen today.

                                                                                                                                                                                                                                                              Standards for quality, client expectations, competition for market share, all those are not going to go down just because there's a new tool that helps in creating software.

                                                                                                                                                                                                                                                              New tools bring with them new ways to make errors, it's always been that way and the world hasn't ended yet...

                                                                                                                                                                                                                                                            • t420mom 6 minutes ago

                                                                                                                                                                                                                                                              I don't really want to increase the amount of time I spend doing code reviews. It's not the fun part of programming for me.

                                                                                                                                                                                                                                                              Now, if you could switch it around so that I write the code, and the AI reviews it, that would be something.

                                                                                                                                                                                                                                                              Imagine if your whole team got back the time they currently spend on performing code reviews or waiting for code reviews.

                                                                                                                                                                                                                                                            • cubefox 23 minutes ago

                                                                                                                                                                                                                                                              I'm not tired, I'm afraid.

                                                                                                                                                                                                                                                              First, I'm afraid of technological unemployment.

In the past, automation meant that workers could move into non-automated jobs, if they were skilled enough. But superhuman AI now seems only a few years away. It will be our last invention, and it will mean total automation. There will be hardly any jobs left, if any, that only a human can do.

                                                                                                                                                                                                                                                              Many countries will likely move away from a job-based market economy. But technological progress will not stop. The US, owning all the major AI labs, will leave all other societies behind. Except China perhaps. Everyone else in the world will be poor by comparison, even if they will have access to technology we can only dream of today.

                                                                                                                                                                                                                                                              Second, I'm afraid of war. An AI arms race between the US and China seems already inevitable. A hot war with superintelligent AI weapons could be disastrous for the whole biosphere.

                                                                                                                                                                                                                                                              Finally, I'm afraid that we may forever lose control to superintelligence.

In nature we rarely see less intelligent species controlling more intelligent ones. It is unclear whether we can align superintelligence well enough that it has only humanity's best interests in mind, the way a parent cares for their children. Superintelligent AI might conclude that humans are no more important in the grand scheme of things than bugs are to us.

And if AI lets us live but continues to pursue its own goals, humanity will from then on be only a small footnote in the history of intelligence: that relatively unintelligent species from the planet "Earth" that gave rise to advanced intelligence in the cosmos.

                                                                                                                                                                                                                                                              • neta1337 14 minutes ago

                                                                                                                                                                                                                                                                >But superhuman AI seems now only few years away

Seems unreasonable. You are afraid because marketing gurus like Altman made you believe that a frog that can make a bigger leap than before will be able to fly.

                                                                                                                                                                                                                                                                • cubefox 2 minutes ago

                                                                                                                                                                                                                                                                  No, because we have seen massive improvements in AI over the last years, and all the evidence points to this progress continuing at a fast pace.

                                                                                                                                                                                                                                                              • gizmo 2 hours ago

                                                                                                                                                                                                                                                                AI writing is pretty bad, AI code is pretty bad, AI art is pretty bad. We all know this. But it's easy to forget how many new opportunities open up when something becomes 100x or 10000x cheaper. Things that are 10x worse but 100x cheaper are still extremely valuable. It's the relentless drive to making things cheaper, even at the expense of quality, that has made our high quality of life possible.

                                                                                                                                                                                                                                                                You can make houses by hand out of beautiful hardwood with complex joinery. Houses built by expert craftsmen are easily 10x better than the typical house built today. But what difference does that make when practically nobody can afford it? Just like nobody can afford to have a 24/7 tutor that speaks every language, can help you with your job, grammar check your work, etc.

                                                                                                                                                                                                                                                                AI slop is cheap and cheapness changes everything.

                                                                                                                                                                                                                                                                • Gigachad an hour ago

                                                                                                                                                                                                                                                                  Why do we need art to be 10000x cheaper? There was already more than enough art being produced. Now we just have infinite waves of slop drowning out everything that’s actually good.


                                                                                                                                                                                                                                                                    • lijok 13 minutes ago

                                                                                                                                                                                                                                                                      > Now we just have infinite waves of slop drowning out everything that’s actually good

                                                                                                                                                                                                                                                                      On the contrary. Slop makes the good stuff stand out.

                                                                                                                                                                                                                                                                      • gizmo an hour ago

                                                                                                                                                                                                                                                                        A toddler's crayon art doesn't end up in the Louvre, nor does AI slop. Most art is bad art and it's been this way since the dawn of humanity. For as long as we can distinguish good art from bad art we can curate and there is nothing to worry about.

                                                                                                                                                                                                                                                                        • foolofat00k 40 minutes ago

                                                                                                                                                                                                                                                                          That's just the problem -- you can't.

                                                                                                                                                                                                                                                                          Not because you can't distinguish between _one_ bad piece and _one_ good piece, but because there is so much production capacity that no human will ever be able to look at most of it.

                                                                                                                                                                                                                                                                          And it's not just the AI stuff that will suffer here, all of it goes into the same pool, and humans sample from that pool (using various methodologies). At some point the pool becomes mostly urine.

                                                                                                                                                                                                                                                                        • erwald an hour ago

                                                                                                                                                                                                                                                                          For the same reason we don't want art to be 10,000x times more expensive? Cf. status quo bias etc.

                                                                                                                                                                                                                                                                        • GaggiX an hour ago

                                                                                                                                                                                                                                                                          They are not even that bad anymore to be honest.

                                                                                                                                                                                                                                                                          • jay_kyburz an hour ago

Information is not like physical products, if you ask me. When the information is wrong, its value flips from positive to negative. You might be paying less for progress, but you are not progressing slower; you are progressing in the wrong direction.

                                                                                                                                                                                                                                                                            • grecy an hour ago

                                                                                                                                                                                                                                                                              And it will get a lot better quickly. Ten years from now it will not be slop.

                                                                                                                                                                                                                                                                              • atoav an hour ago

Or it will all be slop, as there is no non-slop data to train on anymore.

                                                                                                                                                                                                                                                                                • Applejinx 40 minutes ago

                                                                                                                                                                                                                                                                                  No, I don't think that's true. What will instead happen is there will be expert humans or teams of them, intentionally training AI brains rather than expecting wonders to occur just by turning the training loose on random hoovered-up data.

                                                                                                                                                                                                                                                                                  Brainmaker will be a valued human skill, and people will be trying to work out how to train AI to do that, in turn.

                                                                                                                                                                                                                                                                            • Validark 26 minutes ago

                                                                                                                                                                                                                                                                              One thing that I hate about the post-ChatGPT world is that people's genuine words or hand-drawn art can be classified as AI-generated and thrown away instantly. What if I wanted to talk at a conference and used somebody's AI trigger word so they instantly rejected me even if I never touched AI at all?

This has already happened in academia, where certain professors dump(ed) their students' essays into ChatGPT, ask it if it wrote them, and fail anyone whose essay ChatGPT claims. Obviously this is beyond moronic: ChatGPT doesn't have a memory of everything it's ever produced, you can ask it for different writing styles, and some people naturally write in a style similar to ChatGPT's, which is precisely why ChatGPT has a signature style at all.

I've also heard of artists having their work removed from competitions over claims that it was auto-generated, even when they have a video of themselves producing it stroke by stroke. It turns out AI generates art based on human art, so obviously there are some people out there whose style looks like what AI is reproducing.

                                                                                                                                                                                                                                                                              • KaiserPro 37 minutes ago

                                                                                                                                                                                                                                                                                I too am not looking forward to industrial scale job disruption that AI brings.

                                                                                                                                                                                                                                                                                I used to work in VFX, and one day I want to go back to it. However I suspect that it'll be entirely hollowed out in 2-5 years.

The problem is that, like typesetting, the typewriter, or the word processor, LLMs make writing text so much faster and easier.

The arguments about handwriting vs. the typewriter are quite analogous to LLMs vs. writing purely by hand. People who were good and fast at handwriting hated the typewriter. Everyone else embraced it.

The ancient Greeks were deeply suspicious of the written word as well:

> If men learn this [writing], it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.

I don't like LLMs muscling in and kicking me out of things that I love. But can I put the genie back in the bottle? No. I will have to adapt.

                                                                                                                                                                                                                                                                                • eleveriven 26 minutes ago

Yep, there is a possibility that entire industries will be transformed, leading to uncertainty about employment.

                                                                                                                                                                                                                                                                                • franciscop 2 hours ago

                                                                                                                                                                                                                                                                                  > "Yes, I realize that thinking like this and writing this make me a Neo-Luddite in your eyes."

                                                                                                                                                                                                                                                                                  Not quite, I believe (and I think anyone can) both that AI will likely change the world as we know it, AND that right now it's over-hyped to a point that it gets tiring. For me this is different from e.g. NFTs, "Big Data", etc. where I only believed they were over-hyped but saw little-to-no substance behind them.

                                                                                                                                                                                                                                                                                  • wrasee 2 hours ago

For me, what’s important is that you are able to communicate effectively. Whether you use language tools, other tools, or even a real personal assistant: if you effectively communicate a point that is ultimately yours in the making, I expect that is what matters and will win out.

Otherwise this is just about style. That matters where personal creative expression matters, and in fairness to the article, the author hits on a few good examples here. But there are many cases where personal expression is less important, or even an impediment to what’s most important: communicating effectively.

                                                                                                                                                                                                                                                                                    The same-ness of AI-speak should also diminish as the number and breadth of the technologies mature beyond the monoculture of ChatGPT, so I’m also not too worried about that.

                                                                                                                                                                                                                                                                                    An accountant doesn’t get rubbished if they didn’t add up the numbers themselves. What’s important is that the calculation is correct. I think the same will be true for the use of LLMs as a calculator of words and meaning.

                                                                                                                                                                                                                                                                                    This comment is already too long for such a simple point. Would it have been wrong to use an LLM to make it more concise, to have saved you some of your time?

                                                                                                                                                                                                                                                                                    • t43562 2 hours ago

                                                                                                                                                                                                                                                                                      The problem is that we haven't invented AI that reads the crap that other AIs produce - so the burden is now on the reader to make sense of whatever other people lazily generate.

                                                                                                                                                                                                                                                                                      • danielbln 3 minutes ago

                                                                                                                                                                                                                                                                                        But we do. The same AI that generates can read and reduce/summarize/evaluate.

                                                                                                                                                                                                                                                                                        • Gigachad an hour ago

                                                                                                                                                                                                                                                                                          I envision a future where the internet is entirely bots talking to each other and people have just gone outside to talk face to face, the only place that’s actually real.

                                                                                                                                                                                                                                                                                      • jeswin an hour ago

                                                                                                                                                                                                                                                                                        > But I’m pretty sure I can do without all that ... test cases ...

                                                                                                                                                                                                                                                                                        Test cases?

I did a Show HN [1] a couple of days back for a UI library built almost entirely with AI. Gpt-o1 generated these test cases for me: https://github.com/webjsx/webjsx/tree/main/src/test - in minutes instead of days. The quality of the test cases is comparable to what a human would produce.

75% of the code I've written in the last year has been with AI. If you still see no value in it (especially for things like test cases), I'm afraid you haven't figured out how to use AI as a tool.

                                                                                                                                                                                                                                                                                        [1]: https://news.ycombinator.com/item?id=41644099
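For readers curious what such generated tests tend to look like, here is a minimal sketch in the style of typical LLM-written unit tests for a virtual-DOM factory. The `createElement` helper and its shape are stand-ins invented for this illustration, not webjsx's actual API:

```typescript
// A stand-in virtual-DOM node type, assumed for this example only.
type VNode = {
  tag: string;
  props: Record<string, unknown>;
  children: (VNode | string)[];
};

// Hypothetical element factory, NOT webjsx's real implementation.
function createElement(
  tag: string,
  props: Record<string, unknown> | null,
  ...children: (VNode | string)[]
): VNode {
  return { tag, props: props ?? {}, children };
}

// The kind of assertions an LLM typically generates:
// exercise the happy path plus a nested-child edge case.
const node = createElement("div", { id: "app" }, "hello", createElement("span", null));
console.assert(node.tag === "div");
console.assert(node.props["id"] === "app");
console.assert(node.children.length === 2);
console.assert((node.children[1] as VNode).tag === "span");
```

Whether such output saves days, as claimed above, depends on how much review the generated assertions need; the mechanical shape, at least, is easy to produce.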

                                                                                                                                                                                                                                                                                        • a5c11 6 minutes ago

That means the code you wrote must have been pretty boring and repetitive. No way AI would produce code for, for example, proprietary hardware solutions. Try AI on something that isn't already on StackOverflow.

Besides, I'd rather spend hours writing code than try to explain to a stupid bot what I want, only to reshape its output later anyway.

                                                                                                                                                                                                                                                                                        • kingkongjaffa an hour ago

Generally, the people who seriously let genAI write for them without copious editing were the ones who were bad writers with poor taste anyway.

                                                                                                                                                                                                                                                                                          I use GenAI everyday as an idea generator and thought partner, but I would never simply copy and paste the output somewhere for another person to read and take seriously.

                                                                                                                                                                                                                                                                                          You have to treat these things adversarially and pick out the useful from the garbage.

It just lets people who create junk food create more junk food for people who consume junk food. But there is the occasional nugget of a good idea that you can apply to your own organic human writing.

                                                                                                                                                                                                                                                                                          • ryanjshaw 2 hours ago

                                                                                                                                                                                                                                                                                            > There are no shortcuts to solving these problems, it takes time and experience to tackle them.

                                                                                                                                                                                                                                                                                            > I’ve been working in testing, with a focus on test automation, for some 18 years now.

OK, the first thought that came to my mind reading this: sounds like an opportunity to build an AI-driven product.

                                                                                                                                                                                                                                                                                            I've been using Cursor daily. I use nothing else. It's brilliant and I'm very happy. If I could have Cursor for Well-Designed Tests I'd be extra happy.

                                                                                                                                                                                                                                                                                            • unraveller 24 minutes ago

                                                                                                                                                                                                                                                                                              If you go back to the earliest months of the audio & visual recording medium it was also called uncanny, soulless and of dubious quality compared to real life. Until it wasn't.

                                                                                                                                                                                                                                                                                              I don't care how many repulsive AI slop video clips get made or promoted for shock value. Today is day 1 and day 2 looks far better with none of the parasocial celebrity hangups we used as short-hand for a quality marker - something else will take that place.

                                                                                                                                                                                                                                                                                              • Janicc an hour ago

Without any sort of AI, we'd probably be left with the most exciting yearly releases being 3-5% performance increases in hardware (while being 20% more expensive, of course), the 100,000th JavaScript framework, and occasionally a new Windows which everybody hates. People talk about how population collapse is going to mess up society, but I think complete stagnation in new consumer goods and technology is just as likely to do the deed. Maybe AI will fail to improve from this point, but that's a dark future to imagine. Especially if it lasts for the next 50 years.

                                                                                                                                                                                                                                                                                                • siffin an hour ago

                                                                                                                                                                                                                                                                                                  Neither of those things will end society, they aren't even issues in the grand scale of things.

                                                                                                                                                                                                                                                                                                  Climate change and biosphere collapse, on the other hand, are already ending society and definitely will, no exceptions possible - unless someone is capable of performing several miracles.

                                                                                                                                                                                                                                                                                                • me551ah an hour ago

People talk about 'AI' as if StackOverflow didn't exist. Reinventing the wheel is something programmers don't do anymore. Most of the time, someone somewhere has already solved the problem you are solving. Programming used to be about finding these solutions and repurposing them for your needs. Now it has changed to asking the AI the exact question, with the AI acting as a better search engine.

The gains to programming speed and ability are modest at best; the only ones talking about AI replacing programmers are people who can't code. If anything, AI will increase the need for more programmers, because people rarely delete code. With the help of AI, code complexity is going to go through the roof, eventually growing enough not to fit into the context windows of most models.

                                                                                                                                                                                                                                                                                                  • est an hour ago

AI acts like a bad intern these days, and should be treated like one: give it plenty of guidance, and don't make important tasks depend on it.

                                                                                                                                                                                                                                                                                                    • EMM_386 27 minutes ago

                                                                                                                                                                                                                                                                                                      The one use of AI that annoys me the most is Google trying to cram it into search results.

                                                                                                                                                                                                                                                                                                      I don't want it there, I never look at it, it's wasting resources, and it's a bad user experience.

                                                                                                                                                                                                                                                                                                      I looked around a bit but couldn't see if I can disable that when logged in. I should be able to.

                                                                                                                                                                                                                                                                                                      I don't care what the AI says ... I want the search results.

                                                                                                                                                                                                                                                                                                      • willguest 2 hours ago

                                                                                                                                                                                                                                                                                                        Leave it up to a human to overgeneralize a problem and make it personal...

                                                                                                                                                                                                                                                                                                        The explosion of dull copy and generic wordsmithery is, to me, just a manifestation of the utilitarian profiteering that has elevated these models to their current standing.

                                                                                                                                                                                                                                                                                                        Let us not forget that the whole game is driven by the production of 'more' rather than 'better'. We would all rather have low-emission, high-expression tools, but that's simply not what these companies are encouraged to produce.

                                                                                                                                                                                                                                                                                                        I am tired of these incentive structures. Casting the systemic issue as a failure of those who use the tools ignores the underlying motivation and keeps us focused on the effect and not the cause, plus it feels old-fashioned.

                                                                                                                                                                                                                                                                                                        • JimmyBuckets an hour ago

                                                                                                                                                                                                                                                                                                          Can you hash out what you mean by your last paragraph a bit more? What incentive structures in particular?

                                                                                                                                                                                                                                                                                                          • jay_kyburz 5 minutes ago

                                                                                                                                                                                                                                                                                                            Not 100% sure what Will was trying to say, but what jumped into my head was perhaps that we'll see quality sites try and distinguish themselves by being short and direct.

                                                                                                                                                                                                                                                                                                            Long-winded writing will become a liability.

                                                                                                                                                                                                                                                                                                        • seydor an hour ago

                                                                                                                                                                                                                                                                                                          > same massive surge I’ve seen in the application of artificial intelligence (AI) to pretty much every problem out there

I have not. Perhaps early-stage programming is the most 'applied' AI so far, but there is still not a single major AI movie and no consumer robots.

                                                                                                                                                                                                                                                                                                          I think it's way too early to be tired of it

                                                                                                                                                                                                                                                                                                          • alentred an hour ago

                                                                                                                                                                                                                                                                                                            I am tired of innovations being abused. AI itself is super exciting and fascinating. But, it being abused -- to generate content to drive more ad-clicks, or the "Now better with AI" promise on every landing page, etc. etc. -- that I am tired of, yes.

                                                                                                                                                                                                                                                                                                            • ricardobayes 2 hours ago

                                                                                                                                                                                                                                                                                                              I personally don't see AI as the new Internet, as some claim it to be. I see it more as the new 3D-printing.

                                                                                                                                                                                                                                                                                                              • eleveriven 2 hours ago

                                                                                                                                                                                                                                                                                                                AI is a tool, and like any tool, it's only as good as how we choose to use it.

                                                                                                                                                                                                                                                                                                                • thewarrior an hour ago

                                                                                                                                                                                                                                                                                                                  I’m tired of farming - Someone in 5000 BC

                                                                                                                                                                                                                                                                                                                  I’m tired of electricity - Someone in 1905

                                                                                                                                                                                                                                                                                                                  I’m tired of consumer apps - Someone in 2020

                                                                                                                                                                                                                                                                                                                  The revolution will happen regardless. If you participate you can shape it in the direction you believe in.

                                                                                                                                                                                                                                                                                                                  AI is the most innovative thing to happen in software in a long time.

                                                                                                                                                                                                                                                                                                                  And personally AI is FUN. It sparks joy to code using AI. I don’t need anyone else’s opinion I’m having a blast. It’s a bit like rails for me in that sense.

                                                                                                                                                                                                                                                                                                                  This is HACKER news. We do things because it’s fun.

                                                                                                                                                                                                                                                                                                                  I can tackle problems outside of my comfort zone and make it happen.

                                                                                                                                                                                                                                                                                                                  If all you want to do is ship more 2020s-era B2B SaaS till kingdom come, no one is stopping you :P

                                                                                                                                                                                                                                                                                                                  • StefanWestfal an hour ago

                                                                                                                                                                                                                                                                                                                    At no point does the author suggest that AI is not going to happen or that it is not useful. He expresses frustration with marketing, false promises, pitching of superficial solutions for deep problems, and the usage of AI to replace meaningful human interactions. In short, the text is not about the technology itself.

                                                                                                                                                                                                                                                                                                                    • thewarrior an hour ago

                                                                                                                                                                                                                                                                                                                      That’s always the case with any new technology. Tech isn’t going to make everyone happy or achieve world peace.

                                                                                                                                                                                                                                                                                                                    • LunaSea an hour ago

                                                                                                                                                                                                                                                                                                                      > The revolution will happen regardless. If you participate you can shape it in the direction you believe in

                                                                                                                                                                                                                                                                                                                      This is incredibly naïve. You don't have a choice.

                                                                                                                                                                                                                                                                                                                    • snowram 2 hours ago

                                                                                                                                                                                                                                                                                                                      I quite like some parts of AI. Ray reconstruction and supersampling methods have been getting incredible, and I can now play games at twice the frames per second. On the scientific side, meteorological prediction and protein folding have made formidable progress thanks to it. Too bad this isn't the side of AI that is in the spotlight.

                                                                                                                                                                                                                                                                                                                      • zone411 an hour ago

                                                                                                                                                                                                                                                                                                                        The author is in for a rough time in the coming years, I'm afraid. We've barely scratched the surface with AI's integration into everything. None of the major voice assistants even have proper language models yet, and ChatGPT only just introduced more natural, low-latency voices a few days ago. Software development is going to be massively impacted.

                                                                                                                                                                                                                                                                                                                        • BoGoToTo an hour ago

                                                                                                                                                                                                                                                                                                                          My worry is what happens once large segments of the population become unemployable.

                                                                                                                                                                                                                                                                                                                          • anonyfox 6 minutes ago

                                                                                                                                                                                                                                                                                                                            You should really have a look at Marx. He literally predicted what will happen when we reach the state of "let machines do all the work", and also how this is exactly what finally implodes capitalism as a concept. The major problem is that he believed the industrial revolution would automate everything to that extent, which it didn't, but here we are with a reasonable chance that AI will finally do the trick.

                                                                                                                                                                                                                                                                                                                        • danjl 17 minutes ago

                                                                                                                                                                                                                                                                                                                          One of the pernicious aspects of using AI is the feeling it gives you that you have done all the work without any of the effort. But the time it takes to digest and summarize an article as a human requires a deep ingestion of the concepts. The process is what helps you understand. The AI summary might be better, and didn't take any time, but you don't understand any of it since you didn't do the work. It's similar to the effect of telling people you will do a task, which gives your brain the same endorphins as actually doing the task, resulting in a lower chance that the task ever gets done.

                                                                                                                                                                                                                                                                                                                          • thih9 an hour ago

                                                                                                                                                                                                                                                                                                                            Doesn’t that kind of change follow the overall trend?

                                                                                                                                                                                                                                                                                                                            We continuously shift to higher-level abstractions, trading reliability for accessibility. We went from binary to assembly, then to garbage collection, and on to using Electron almost everywhere; AI seems like yet another step.

                                                                                                                                                                                                                                                                                                                            • pech0rin 2 hours ago

                                                                                                                                                                                                                                                                                                                              As an aside, it's really interesting how the human brain can so easily read an AI essay and realize it's AI. You would think that with the vast corpus these models were trained on, there would be a more human-sounding voice.

                                                                                                                                                                                                                                                                                                                              Maybe it's overfitting, or maybe just the way models work under the hood, but any time I see AI-written stuff on Twitter, Reddit, or LinkedIn, it's so obvious it's almost disgusting.

                                                                                                                                                                                                                                                                                                                              I guess it's just the brain being good at pattern matching, but it's crazy how fast we have adapted to recognize this.

                                                                                                                                                                                                                                                                                                                              • Jordan-117 2 hours ago

                                                                                                                                                                                                                                                                                                                                It's the RLHF training to make them squeaky clean and preternaturally helpful. Pretty sure without those filters and with the right fine-tuning you could have it reliably clone any writing style.

                                                                                                                                                                                                                                                                                                                                • llm_trw 2 hours ago

                                                                                                                                                                                                                                                                                                                                  One only needs to go to the dirtier corners of the LLM forums to find some _very_ interesting voices there.

                                                                                                                                                                                                                                                                                                                                  To quote someone from a tor bb board: my chat history is illegal in 142 countries and carries the death penalty in 9.

                                                                                                                                                                                                                                                                                                                                  • bamboozled 2 hours ago

                                                                                                                                                                                                                                                                                                                                    But without the RLHF aren’t they less useful “products”?

                                                                                                                                                                                                                                                                                                                                  • infinitifall 2 hours ago

                                                                                                                                                                                                                                                                                                                                    Classic survivorship bias. You simply don't recognise the good ones.

                                                                                                                                                                                                                                                                                                                                    • carlmr 2 hours ago

                                                                                                                                                                                                                                                                                                                                      >Maybe it's overfitting or maybe just the way models work under the hood

                                                                                                                                                                                                                                                                                                                                      It feels more like averaging or finding the median to me. The writing style is just very unobtrusive. Like the average TOEFL/GRE/SAT essay style.

                                                                                                                                                                                                                                                                                                                                      Maybe that's just what most of the material looks like.

                                                                                                                                                                                                                                                                                                                                      • Al-Khwarizmi 2 hours ago

                                                                                                                                                                                                                                                                                                                                        Everyone I know claims to be able to recognize AI text, but every paper I've seen where that ability is A/B tested says that humans are pretty bad at this.

                                                                                                                                                                                                                                                                                                                                        • chmod775 2 hours ago

                                                                                                                                                                                                                                                                                                                                          These models are not trained to act like a single human in a conversation, they're trained to be every participant and their average.

                                                                                                                                                                                                                                                                                                                                          Every instance of a human choosing not to engage or speak about something - because they didn't want to or are just clueless about the topic, is not part of their training data. They're only trained on active participants.

                                                                                                                                                                                                                                                                                                                                          Of course they'll never seem like a singular human with limited experiences and interests.

                                                                                                                                                                                                                                                                                                                                          • izacus an hour ago

                                                                                                                                                                                                                                                                                                                                            The output of those AIs is akin to products and software designed for the "average" user - deep inside uncanny valley, saying nothing specifically, having no specific style, conveying no emotion and nothing to latch on to.

                                                                                                                                                                                                                                                                                                                                            It's the perfect embodiment of HR/corpspeak, which I think is what makes it so triggering for us (ex) corpo drones.

                                                                                                                                                                                                                                                                                                                                          • amelius 2 hours ago

                                                                                                                                                                                                                                                                                                                                            Maybe because the human brain gets tired and cannot write at the same quality level all the time, whereas an AI can.

                                                                                                                                                                                                                                                                                                                                            Or maybe it's because of the corpus of data that it was trained on.

                                                                                                                                                                                                                                                                                                                                            Or perhaps because AI is still bad at any kind of humor.

                                                                                                                                                                                                                                                                                                                                          • chalcolithic an hour ago

                                                                                                                                                                                                                                                                                                                                            In Soviet planet Earth AI gets tired of you. That's what I expect future to be like, anyways

                                                                                                                                                                                                                                                                                                                                            • CodeCompost 2 hours ago

                                                                                                                                                                                                                                                                                                                                              We're all tired of it, but to ignore it is to be unemployed.

                                                                                                                                                                                                                                                                                                                                              • kunley 26 minutes ago

                                                                                                                                                                                                                                                                                                                                With all due respect, that seems like a cliché, repeated maybe just because others repeat it.

                                                                                                                                                                                                                                                                                                                                                Working in IT operations (mostly), I haven't seen literally any case of someone's job in danger because of not using "AI".

                                                                                                                                                                                                                                                                                                                                                • sph 2 hours ago

                                                                                                                                                                                                                                                                                                                                  Depends on which point of your career you're at. With 18 years of experience, consulting for tech companies, I can afford to be tired of AI. I don't get paid to write boilerplate code, and avoiding anyone knocking at the door with yet another great AI-powered idea makes commercial sense, just as I ignored everyone wanting to build the next blockchain product 5 years ago, with no major loss of income.

                                                                                                                                                                                                                                                                                                                                  Also, running a bootstrapped business, I have bigger fish to fry than playing mentor to Copilot to write a React component or generating bullshit copy for my website.

                                                                                                                                                                                                                                                                                                                                                  I'm not sure we need more FUD saying that the choice is between AI or unemployment.

                                                                                                                                                                                                                                                                                                                                                  • Al-Khwarizmi 2 hours ago

                                                                                                                                                                                                                                                                                                                                                    I find comparisons between AI and blockchain very misleading.

                                                                                                                                                                                                                                                                                                                                                    Blockchain is almost entirely useless in practice. I have no reason to disdain it, in fact I was active in crypto around 10-12 years ago when I was younger and more excited about tech than now, and I had fun. But the fact is that the utility that it has brought to most of society is essentially to have some more speculative assets to gamble on, at ludicrous energy and emissions costs.

                                                                                                                                                                                                                                                                                                                                                    Generative AI, on the other hand, is something I'm already using almost every day and it's saving me work. There may be a bubble but it will be more like the dotcom bubble (i.e., not because the tech is useless, but because many companies jump to make quick bucks without even knowing much about the tech).

                                                                                                                                                                                                                                                                                                                                                    • Applejinx an hour ago

                                                                                                                                                                                                                                                                                                                                                      I mean, to be selfish at apparently a dicey point in history, go ahead and FUD and get people to believe this.

                                                                                                                                                                                                                                                                                                                                                      None of my useful work is AI-able, and some of the useful work is towards being able to stand apart from what is obviously generated drivel. Sounds like the previous poster with the bootstrapped business is in a similar position.

                                                                                                                                                                                                                                                                                                                                                      Apparently AI is destroying my potential competition. That seems unfair, but I didn't tell 'em to make such an awful mistake. How loudly am I obliged to go 'stop, don't, come back'?

                                                                                                                                                                                                                                                                                                                                                    • sunaookami 2 hours ago

                                                                                                                                                                                                                                                                                                                                                      Speak for yourself.

                                                                                                                                                                                                                                                                                                                                                      • snickerer 2 hours ago

So are all those cab drivers who ignored autonomous driving now unemployed?

                                                                                                                                                                                                                                                                                                                                                        • anonzzzies 2 hours ago

When it's for sale everywhere (I cannot buy one) and people trust it, all cab drivers will be gone. Whether they end up unemployed will depend on their resilience, but unlike cars replacing coach drivers, there is no similar thing a cab driver can pivot to.

                                                                                                                                                                                                                                                                                                                                                          • snickerer an hour ago

                                                                                                                                                                                                                                                                                                                                                            Yes, we can imagine a future where all cab drivers are unemployed, replaced by autonomous driving. However, we don't know when this will happen, because autonomous driving is a much harder problem than the hype from a few years ago suggested. There isn't even proof that autonomous driving will ever be able to fully replace human drivers.

                                                                                                                                                                                                                                                                                                                                                        • kasperni 2 hours ago

                                                                                                                                                                                                                                                                                                                                                          > We're all tired of it,

                                                                                                                                                                                                                                                                                                                                                          You’re feeling tired of AI, but let’s delve deeper into that sentiment for a moment. AI isn’t just a passing trend—it’s a multifaceted tool that continues to elevate the way we engage with technology, knowledge, and even each other. By harnessing the capabilities of artificial intelligence, we allow ourselves to explore new frontiers of creativity, problem-solving, and efficiency.

                                                                                                                                                                                                                                                                                                                                                          The interplay between human intuition and AI’s data-driven insights creates a dynamic that enriches both. Rather than feeling overwhelmed by it, imagine the opportunities—how AI can shoulder the burdens of mundane tasks, freeing you to focus on the more nuanced, human elements of life.

                                                                                                                                                                                                                                                                                                                                                          /s

                                                                                                                                                                                                                                                                                                                                                        • buddhistdude 2 hours ago

Some of the activities we're involved in are not limited in complexity, for example driving a car. You can have a huge amount of experience driving a car but will still face new situations.

The things most knowledge workers are working on are limited problems, and it is just a matter of time until the machine reaches that level; then our employment will end.

Edit: also, that doesn't have to be AGI. It just needs to be good enough for the problem.

                                                                                                                                                                                                                                                                                                                                                          • ETH_start an hour ago

That's fine, he can stick with his horse and buggy. Cognition is undergoing its transition to automobiles.

                                                                                                                                                                                                                                                                                                                                                            • sirsinsalot an hour ago

                                                                                                                                                                                                                                                                                                                                                              If humans have a talent for anything, it is mechanising the pollution of the things we need most.

                                                                                                                                                                                                                                                                                                                                                              The earth. Information. Culture. Knowledge.

                                                                                                                                                                                                                                                                                                                                                              • Meniceses 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                I love AI.

In comparison to a lot of other technologies, we actually have jumps in quality left and right, great demos, and new things which are really helpful.

It's fun to watch the AI news because there is always something new and relevant happening.

I'm worried about the impact of AI, but this is a billion times better than the last 10 years, which was basically just cryptobros, NFTs, and blockchain shit that amounts to fraud.

It's not just some GenAI stuff: we're talking about blind people getting better help through image analysis, AlphaFold, LLMs being impressive as hell, and the research currently happening.

And yes, I also already see benefits in my job and in my startup.

                                                                                                                                                                                                                                                                                                                                                                • bamboozled 2 hours ago

I’m truly asking in good faith here because I don’t know, but what has AlphaFold actually helped us achieve?

                                                                                                                                                                                                                                                                                                                                                                  • Meniceses 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                    It allows us to speed up medical research.

                                                                                                                                                                                                                                                                                                                                                                    • bamboozled 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                      In what field specifically and how ?

                                                                                                                                                                                                                                                                                                                                                                      • scotty79 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                        Are you asking what field of science or what industry is interested in predicting how proteins fold?

                                                                                                                                                                                                                                                                                                                                                                        Biotechnology and medicine probably.

The pipeline from science to application sometimes takes decades, but I'm sure you can find news of advancements enabled by finding short, easy-to-synthesize proteins that fit a particular receptor to block it, or simplified enzymes that still process chemicals of interest more efficiently than natural ones. Finding them would be way harder without the ability to predict how a sequence of amino acids will fold.

You'd need to actually try to manufacture them and then look at them closely.

                                                                                                                                                                                                                                                                                                                                                                        First thing that came to my mind as a possible application is designing monoclonal antibodies. Here's some paper about something relating to alpha fold and antibodies:

                                                                                                                                                                                                                                                                                                                                                                        https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10349958/

                                                                                                                                                                                                                                                                                                                                                                        • RivieraKid 41 minutes ago

                                                                                                                                                                                                                                                                                                                                                                          I guess he's asking for specific examples of AlphaFold leading to some tangible real-world benefit.

                                                                                                                                                                                                                                                                                                                                                                • scotty79 2 hours ago

AI has so far just been trained to generate corporate BS speak in a corporate BS format. That's why it's tiring. A more unique touch in communication will come later, as fine-tunes and LoRAs (if possible) of those models are shared.

                                                                                                                                                                                                                                                                                                                                                                  • fallingknife 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                    I'm not. I think it's awesome and I can't wait to see what comes out next. And I'm completely OK with all of my work being used to train models. Bunch of luddites and sour grapes around here on HN these days.

                                                                                                                                                                                                                                                                                                                                                                    • elpocko 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                      Same here! Amazing stuff that I have waited for my entire life, and I won't let luddite haters ruin it for me. Their impotent rage is tiring but in the end it's just one more thing you have to ignore.

                                                                                                                                                                                                                                                                                                                                                                      • yannis an hour ago

Absolutely amazing stuff. I am now three score and ten, and in my lifetime I've seen a lot of changes: from slide rules to calculators to PCs, from dot matrix printers to laser jets, and dozens of other things. Wish AI was available when I was doing my PhD. If you know its limitations it can be very useful. At present I occasionally use it to translate references from Wikipedia articles to BibTeX format. It is very good at this; I only need to fix a few minor errors, letting me focus on the core of what I am doing. But human nature always resists change, especially if it leads to the unknown. I must admit that I think AI will bring negative consequences as it will be misused by politicians and the military; they need to be "regulated", not the AI.

                                                                                                                                                                                                                                                                                                                                                                        • fallingknife an hour ago

                                                                                                                                                                                                                                                                                                                                                                          Yeah, they made something that passes a Turing test, and people on HN of all places hate it? What happened to this place? It's like the number one thing people hate around here now is another man's success.

                                                                                                                                                                                                                                                                                                                                                                          I won't ignore them. I'll continue to loudly disagree with the losers and proudly collect downvotes from them knowing I got under their skin.

                                                                                                                                                                                                                                                                                                                                                                          • Applejinx an hour ago

                                                                                                                                                                                                                                                                                                                                                                            Eliza effectively passed Turing tests. I think you gotta do a little better than that, and 'ha ha I made you mad' isn't actually the best defense of your position.

                                                                                                                                                                                                                                                                                                                                                                        • Kiro an hour ago

                                                                                                                                                                                                                                                                                                                                                                          You're getting downvoted, but I agree with your last sentence — and not just about AI. The amount of negativity here regarding almost everything is appalling. Maybe it's rose-tinted nostalgia but I don't remember it being like this a few years ago.

                                                                                                                                                                                                                                                                                                                                                                          • CaptainFever 14 minutes ago

                                                                                                                                                                                                                                                                                                                                                                            Hacker News used to be nicknamed Hater News, as I recall.

                                                                                                                                                                                                                                                                                                                                                                        • littlestymaar 2 hours ago

                                                                                                                                                                                                                                                                                                                                                                          It's not AI you hate, it's Capitalism.

  • thenaturalist an hour ago

    Say what you want about income and asset inequality, but capitalism has done more to lift hundreds of millions of people out of poverty over the past 50 years than any other religion, aid programme or whatever else.

    I think it's very important and fair to be critical about how we as a society implement capitalism, but such a broad generalization misses the mark immensely.

    Talk to anyone who grew up in a Communist country in the 2nd half of the 20th century if you want to validate that sentiment.

    • BoGoToTo an hour ago

      OK, but let's take this to its logical conclusion: at some point there will be models that displace a large segment of the workforce. How does capitalism even function then?

      • littlestymaar 29 minutes ago

        > but capitalism has done more to lift hundreds of millions of people out of poverty over the past 50 years than any other religion, aid programme or whatever else.

        Technology did what you ascribe to Capitalism, most of the time thanks to state intervention — and the weaker the state, the weaker the growth (see how Asia has outperformed everybody else now that laissez-faire policies are mainstream in the West).

        > Talk to anyone who grew up in a Communist country in the 2nd half of the 20th century if you want to validate that sentiment.

        The fact that one alternative to Capitalism was a failure doesn't mean Capitalism isn't bad.