• robmerki 4 days ago

    Unfortunately there is no way to combat this, and it seems like the end of the internet we once knew. Even with a “proof of human” technology, people could still just paste whatever AI-generated text they wanted, under their “real” account.

    This has likely been going on since ChatGPT was first released.

    • ivape 4 days ago

      More of the same? Reddit's genesis included fake accounts and content. I don't doubt that upvotes and the front page are fully curated:

      https://economictimes.indiatimes.com/magazines/panache/reddi...

      We all have an expectation that these message boards are like the forums of the 2000s, but that just isn't true and hasn't been for a long time. It seems we will never see that internet again, because AI was the atomic bomb dropped on all this astroturfing and engineered content. Educating people away from these synthetic forums appears near impossible.

      • gnabgib 4 days ago

        Discussion (212 points, 1 day ago, 144 comments) https://news.ycombinator.com/item?id=43806940

        • dlivingston 3 days ago

          I'm in favor of the university's project and think many more projects like this are needed.

          The internet is swarmed with bots. I would estimate something like 25% of all Reddit, X/Twitter, YouTube and Facebook comments come from bots. Perhaps higher.

          It's not like r/CMV was some purely human oasis in the Reddit bot-sea.

          It's a tough pill to swallow, but the internet is dead as far as open forum communications go. We need to get a solid understanding of the scope, scale, and solutions to this problem -- because, trust you me, it will be exploited if not.

          • strathmeyer 4 days ago

            > The idea that my opinion on an issue could have been influenced by a fake personal anecdote invented by a research bot is abhorrent to me.

            Then stop basing your opinion on issues on personal anecdotes from complete strangers. This is nothing new.

            • bitshiftfaced 4 days ago

              The subreddit has question-askers give feedback on whether their view was changed, and the askers are aware of how their response might appear publicly. This makes me wonder if "appeal to identity" is especially effective, at least superficially if not actually. The fine-tuning might've been reacting to this.

              • knowitnone 4 days ago

                "This project yields important insights, and the risks (e.g. trauma etc.) are minimal." They can't possibly measure the insights or claim that the trauma is minimal.

                • montroser 4 days ago

                  > I think the reason I find this so upsetting is that, despite the risk of bots, I like to engage in discussions on the internet with people in good faith. The idea that my opinion on an issue could have been influenced by a fake personal anecdote invented by a research bot is abhorrent to me.

                  I like Simon's musings in general, but are we not way past this point already? It is completely and totally inevitable that if you try to engage in discussions on the internet, you will be influenced by fake personal anecdotes invented by LLMs. The only difference here is that they eventually disclosed it -- but aren't various state and political actors already doing this in spades, undisclosed?