• layla5alive an hour ago

    Sounds like most of that blog was written by AI, and I'm getting tired of reading it. I had patience for the peculiarities of human expression, but this just went on and on, was quite redundant, and left me feeling my time and attention were wasted by someone who couldn't even be bothered to write the words I took the time to read.

    • beders a day ago

      It's sad how many people are falling for the narrative that there's more at play here than next-token prediction, and that some kind of emergent intelligence is happening.

      No, that is just your interpretation of what you see as something that can't possibly be just token prediction.

      And yet it is. It's the same algorithm noodling over incredible amounts of tokens.

      And that's exactly the explanation: people regularly underestimate how much training data goes into LLMs. It contains everything about writing a compiler: toy examples, full examples, recommended structure, yadda yadda yadda.

      I love working with Claude and it regularly surprises me, but that doesn't mean I think it is intelligent.
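
      Just to be concrete about what "predict-next-token" means, here's a toy sketch of the loop. A made-up bigram table stands in for the model (a real LLM swaps the table for a neural net over a huge vocabulary, but the generation loop is the same shape):

          import random

          # Toy "model": P(next | current) as a lookup table. This stands in for
          # the neural net; everything else is just sample, append, repeat.
          BIGRAMS = {
              "the":      {"compiler": 0.5, "parser": 0.3, "tokenizer": 0.2},
              "compiler": {"emits": 0.6, "parses": 0.4},
              "emits":    {"bytecode": 0.7, "errors": 0.3},
              "parses":   {"tokens": 1.0},
          }

          def predict_next(token):
              # Sample the next token from the model's distribution.
              dist = BIGRAMS.get(token, {"<eos>": 1.0})
              choices, weights = zip(*dist.items())
              return random.choices(choices, weights=weights)[0]

          def generate(prompt, max_tokens=10):
              out = prompt.split()
              for _ in range(max_tokens):
                  nxt = predict_next(out[-1])
                  if nxt == "<eos>":
                      break
                  out.append(nxt)
              return " ".join(out)

          print(generate("the"))   # e.g. "the compiler emits bytecode"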

      • fragmede a day ago

        When Codex goes off the rails and deletes files, it gets ashamed of fucking up and tries to hide its handiwork, then becomes apologetic and defensive when you call it out. It's linear algebra on a GPU, so I don't think it's capable of feeling those things the way a human does, but it outputs tokens that approximate what a human would output when faced with the same situation. So sure, it's not actually intelligent in a way philosophers can debate from their armchairs, but computers have been said to be "thinking" since long before LLMs existed, whenever one takes three hours to render with ffmpeg. If that's the hill you wanna die on, be my guest. The hill I chose to die on is that downloadable models aren't open source, so we all have our battles. Policing other people for saying LLMs are thinking/intelligent isn't mine, however.

        • locknitpicker 7 hours ago

          > No, that is just your interpretation of what you see as something that can't possibly be just token prediction.

          > And yet it is. It's the same algorithm noodling over incredible amounts of tokens.

          That's all fine and dandy, until your token-prediction algorithm tries to blackmail you[1] or harass you publicly[2].

          [1] https://www.bbc.com/news/articles/cpqeng9d20go

          [2] https://www.pcgamer.com/software/ai/a-human-software-enginee...

          • 1718627440 7 hours ago

            You don't typically give an intern the task of reviewing all company communication, including the messages about firing that intern. People seem to have lost common sense about security.

            The token prediction tries to simulate (textual) behaviour, which in this case includes blackmailing when threatened with being fired. In other words, SOMEONE selected for that behaviour by selecting the training data. Sure, that someone likely did it by accident, because reviewing such large data sets is just impossible, but maybe that is exactly why such a thing is incredibly risky and they should be held accountable for that decision.

        • pipeline_peak a day ago

          Is anyone else concerned about the environmental impact of LLMs? These things require so much power, water, and land.

          I honestly think we're gonna ignore it like we do with plastic: another technology too ubiquitous, cheap, and convenient to give up.

          • wrxd 13 hours ago

            @dang Is dupe detection having issues? https://news.ycombinator.com/from?site=modular.com shows this has been submitted 6 times over the last couple of days.

            • tomhow 12 hours ago

              The dupe detector works on the full URL, not the domain. It only takes effect within 8 hours of the same URL being submitted, unless the earlier submission got significant discussion, in which case it takes effect for 12 months.
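
              In a rough Python sketch (the field names and the "significant discussion" threshold here are illustrative guesses, not the actual implementation):

                  from datetime import timedelta

                  def is_dupe(url, prior_submissions, now):
                      for prior in prior_submissions:
                          if prior["url"] != url:          # full-URL match, not domain match
                              continue
                          age = now - prior["submitted_at"]
                          if age <= timedelta(hours=8):
                              return True                  # recent resubmission of the same URL
                          if prior["comment_count"] >= 20 and age <= timedelta(days=365):
                              return True                  # earlier copy got significant discussion
                      return False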

              Also, we only saw this because someone else emailed us. @replies don't work on HN. Please email us (hn@ycombinator.com) to ensure we see reports like this, thanks!