Why I'm leaving Medium: AI policy (medium.com)
Submitted by vouaobrasil 4 hours ago
  • ziddoap 3 hours ago

    >Unlike many, I don’t sit on the fence. I hate it with a passion. I think it’s an affront to humanity and will ultimately cause our downfall. If an article uses AI, even just for header images, I don’t read it.

    This seems just as silly a take as that of the fervent supporters of sticking AI everywhere.

    AI helps in some areas (e.g. assisting medical diagnoses) and is shitty in others (e.g. spam content).

    I wonder if the author would refuse to receive a diagnosis from a doctor if the diagnosis was AI-assisted?

    • happytoexplain 3 hours ago

      I'm somewhere between the author and a true-neutral stance. Something can be "an affront to humanity [that] will ultimately cause our downfall" and also "help in some areas". Those aren't mutually exclusive, logically, despite sounding that way.

      • pxc 3 hours ago

        Indeed the latter is kind of a precursor to the former: AI has to be helpful enough for key use cases in order to become used in whatever critical ways cause it to serve as humanity's downfall.

        Setting aside the apocalyptic prediction in particular, this sort of thing is already the case with many things that have become irritating or pernicious at scale, like automated phone systems and SEO. The latter of those examples already includes AI more and more. If AI weren't a helpful device for reducing writing costs, it wouldn't be playing a role in filling the web with worthless SEO-spam garbage.

        • JohnFen 3 hours ago

          This.

          The question isn't whether or not there are benefits to be had from the technology. The question is whether or not the cost/benefit ratio is favorable. It's not clear to me what the answer to that actually is.

          • tempfile 3 hours ago

            It's coherent (just) for something to be an affront to humanity and helpful for some tasks. It's emphatically not coherent for you to consider something an affront to humanity and to condone using it.

            • happytoexplain 3 hours ago

              I disagree. Generically speaking that's true, but specifically speaking, AI has a wide variety of use cases, and they do not all, from the perspective of a normal user, appear to help spread what they perceive to be the bad parts.

              Obviously if we could somehow separate the good from the bad, we should, but it seems like an intractable problem.

              This dichotomy (polychotomy?) is also aggravated by the fact that when people talk about AI, they are probably 99% referring to generation of prose, art, and code (as opposed to e.g. medical diagnosis).

              • pxc 3 hours ago

                > It's emphatically not coherent for you to consider something an affront to humanity and to condone using it.

                A lot of people (particularly Americans) seem to think of nuclear bombs in exactly this way. They typically see the 'some task' to be of extremely outsized necessity and importance, and I think they'd say that makes the two ideas cohere.

                • tempfile 3 hours ago

                  Then those people do not seriously think nuclear weapons are an affront to humanity. It doesn't just mean "very bad". It's against human dignity. You shouldn't do it, ever. Killing isn't an affront to humanity, but torture is, for example.

                  If you think using nuclear weapons can be justified, then you don't think they're evil, you just think they're dangerous.

                  • Vecr an hour ago

                    If you claim something is an "affront to humanity" it clearly induces doubt in your opponent that you would use it.

                    Not a good negotiating position where deterrence is concerned.

                    • pxc 2 hours ago

                      I absolutely agree about nukes. The US's use of nukes was barbaric imperialism, and when USians say it was justified I think that's a transparent post-hoc rationalization. And I agree totally that the position I outlined is deeply hypocritical.

                      I'm less sure that it's an outright logical contradiction, but I'm inclined to agree with you there as well.

                  • Vecr an hour ago

                    Not really; many AI doomer types use AI for various tasks. Obviously they need to for research, but even outside that they say they get benefit from it.

                • jaredcwhite an hour ago

                  I'm 100% in agreement with the author, and to answer your question: if I found out my doctor had based their diagnosis on output from an AI, I'd find another doctor.

                  • undefined an hour ago
                    [deleted]
                    • ziddoap an hour ago

                      I find that wild, but to each their own!

                    • rsynnott 2 hours ago

                      > AI helps in some areas (e.g. assisting medical diagnoses)

                      Increasingly (and I think certainly in the above case) AI is used as shorthand for genAI (which is unsurprising, as up until recently most AI-ish things got called ML anyway). I certainly hope no-one's using LLMs for medical diagnoses...

                      > I wonder if the author would refuse to receive a diagnosis from a doctor if the diagnosis was AI-assisted?

                      It really depends on what you mean by 'AI-assisted', IMO. If you mean that the doctor had asked a chatbot, I'd very much be looking for a second opinion. What sort of AI assistance did you have in mind?

                      • ziddoap 41 minutes ago

                        >What sort of AI assistance did you have in mind?

                        The designed-for-medicine kind, not the ChatGPT kind.

                      • JohnFen 3 hours ago

                        > I wonder if the author would refuse to receive a diagnosis from a doctor if the diagnosis was AI-assisted?

                        I can't speak for the author, obviously, but personally my answer would be "it depends". If the diagnosis came from a doctor who happened to use AI as one of their tools, I'm OK with that (as long as it was a locally-hosted AI, but that's a different issue). If the diagnosis came from AI without a substantial amount of analysis from a doctor, then I'd absolutely reject that.

                        • ziddoap 3 hours ago

                          That sounds like a completely reasonable approach!

                          What the author has written, in my opinion, is unreasonable because it is absolute in its hatred for AI.

                        • yugffred 3 hours ago

                          Someone has to stop giving these sloppyjoes the clicks, because then they’ll stop making slop.

                          They are uninterested in the content; they just want the clicks.

                          If a doctor just slops out a diagnosis that results in malpractice via “hallucination”, then people will stop going to them.

                          Will you continue to use that dangerous doctor just for the sake of your misguided principle?

                          • ziddoap 3 hours ago

                            >Will you continue to use that dangerous doctor just for the sake of your misguided principle?

                            What a weird interpretation of what I said.

                            Obviously I mean a competent doctor that uses AI-assistance in a responsible manner, not a dangerous doctor that commits malpractice.

                            • yugffred an hour ago

                              It’s helpful to examine extremes to see where your logic breaks down.

                              You are assuming a “competent” blogger even in the face of a slop banner.

                              How will you know the doctor is competent? All you know is they give sloppy diagnoses.

                              How will the op know the blog writer is competent?

                              Their metric is “if they use slop at the top of their blog it’s a waste of their time to investigate further”.

                          • nonrandomstring 3 hours ago

                            I thought so too. But then I changed my mind [0], especially after asking some other people what they thought. In a nutshell, it's about association: people see an AI image and start thinking... hey, maybe the prose and video had a little 'help' too. I still haven't got around to replacing all the generative thumbnails. Once AI stuff gets into your content, it's like pollution: a royal PITA to sieve out.

                            [0] https://cybershow.uk/blog/posts/nomoreai

                            • packetlost 3 hours ago

                              > AI helps in some areas (e.g. assisting medical diagnoses)

                              Uhhh, I don't know about that one. Have we seen studies that show real predictive capabilities? Anecdotal evidence is not helpful, and it seems rather risky to depend on something that has not been thoroughly vetted when it comes to people's lives.

                              • ziddoap 3 hours ago

                                >In 2020, Zhang’s team developed an AI imaging-assisted diagnosis system for COVID-19 pneumonia and published in Cell. Based on the 500,000 copies of CT images that the team studied, the system was able to distinguish COVID-19 from other viral pneumonias within 20 seconds, with an accuracy rate of more than 90%.

                                https://www.nature.com/articles/d42473-022-00035-y?error=coo...

                                >AI improves the lives of patients, physicians, and hospital managers by doing activities usually performed by people but in a fraction of the time and the expense. [...] Not only that, AI assists physicians in detecting diseases by utilizing complicated algorithms, hundreds of biomarkers, imaging findings from millions of patients, aggregated published clinical studies, and thousands of physicians’ notes to improve the accuracy of diagnosis.

                                https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8754556/

                                • rsynnott an hour ago

                                  I mean, if I was given a COVID diagnosis or non-diagnosis based on the above, I'd be reporting the doctor, because its accuracy is far worse than generally available methods (even antigen methods)... Even if visual methods were the only means of diagnosing COVID (as was the case for the first few weeks after the virus emerged), sorry, but I'd be asking for an actual radiologist. Radiologists were able to do this with better accuracy.

                                  Really, the only way this could possibly be useful would be in a hypothetical case where only visual methods were available and there weren't enough radiologists, and even then it wouldn't be _very_ useful, given the high failure rate.

                                  > Not only that, AI assists physicians in detecting diseases by utilizing complicated algorithms, hundreds of biomarkers, imaging findings from millions of patients, aggregated published clinical studies, and thousands of physicians’ notes to improve the accuracy of diagnosis.

                                  This sounds very fuzzy. Can you point to specific approved applications, or is this all hypothetical?

                                  • ziddoap an hour ago

                                    >Can you point to specific approved applications, or is this all hypothetical?

                                    One of those links is a systematic literature review of a bunch of other existing literature.

                                    If that is not enough for you, that is totally understandable! But if I'm honest here, I don't really care enough to go digging even further and trying to find additional studies or approved applications from your preferred country, with an ultimate goal of convincing you.

                                    I would be fine if my current doctor used AI to assist in their diagnosis of me. You don't have to be, it's completely your right to choose that.

                                • JTyQZSnP3cQGa8B 2 hours ago

                                  In diagnosis there is no prediction. The AI can find a pattern somewhere and that’s it. It’s actually very efficient in some fields (like recognizing stuff in a scanned body) but doctors always have the final say. It’s like a hint, and it should stay like that.

                                • wasteduniverse 3 hours ago

                                  [dead]

                                • McBainiel 3 hours ago

                                  I've been thinking a lot lately about why I don't like AI and ultimately I think it's because of its tone. I don't know why OpenAI made ChatGPT so wordy and almost unctuous.

                                  I've realised I don't actually care about people using it for programming or brainstorming or whatever. It's just that I feel so insulted when I read something that is in the default AI voice.

                                  So I don't know that I agree entirely with the writer of the piece but I get where he's coming from. AI writing is unpleasant to read. And I hope Medium reverses their decision.

                                  • zmgsabst 3 hours ago

                                    I enjoy its tone when writing long form content — eg, this start of a book project. Certainly more than many long form blog articles, eg, on Medium.

                                    Tastes vary.

                                    https://zmichaelgehlke.com/misc/in-harmony.html

                                    • poszlem 3 hours ago

                                      The reason people "dislike AI because of its tone" is that they only recognize AI-written text when it's poorly composed. It's likely that you're already interacting with AI-generated text without realising it.

                                      • happytoexplain 3 hours ago

                                        I disagree. The "tone" of the vast majority of AI to me has a clear identity, and is not related to poor composition. It is in fact "high quality", technically speaking, and is a separate problem from that of bad/strange genAI writing. In fact, some real people do write in a similar tone, which makes it hard to tell apart sometimes, which is the problem you describe, but that doesn't obscure the fact that it is the "AI tone".

                                        • pxc 3 hours ago

                                          It takes too long to get anything said and it has a very weak sense of topicality that it papers over with repetitious use of literal words and phrases from the things it's asked.

                                          Afaict, it's 'high quality' only in the sense of not containing many grammatical mistakes or spelling errors, which of course is not 'high quality' so much as 'basic literacy'.

                                          I think I know what you mean about other elements of the tone (staid and obsequious) not necessarily being related to style but they don't quite form a clear identity for me. The defects of the writing are still more marked than its personality.

                                        • jprete 3 hours ago

                                          The possibility of well-written AI prose is a big reason why I dislike AI text generation. Writing for an audience refines the ideas being communicated. If the author doesn't do that refinement themselves, then I'm not reading what the author is thinking, I'm reading what the LLM could patch together from their ideas. If I want the LLM's opinion on how to make a concept work, I can ask it directly. If I'm reading something by an author, I want to know what the author is actually thinking!

                                          • zmgsabst 3 hours ago

                                            If you wanted a chair, you could build it yourself. So why do most people buy chairs?

                                            To be direct:

                                              Your post assumes that all users are equally good at producing content with AI/LLMs. If they're not, then people who are better at it supplying content to those who are worse will become a market.

                                            Just like with chairs and writing, now.

                                      • S0y 3 hours ago

                                        Semi-related, but googling a problem just to stumble on a 100%-AI-written Medium article as the top result is the bane of my existence.

                                        So really, I can get behind the author.

                                        • ayaen 3 hours ago

                                          Some people don't even bother with cleaning up the boilerplate:

                                          "Sure, here are 11 examples of recursive islands..." Yeah, that's how some articles begin.

                                          • stonethrowaway 3 hours ago

                                            Assuming it answers the problem, what’s the issue?

                                            • Vegenoid 3 hours ago

                                              This assumption would often be false, which is the problem. I already have access to LLMs (that I leverage often), I’m using a search engine because I’m looking for high-quality, detailed info or real-world examples from someone who knows what they are talking about.

                                              This blind cheerleading of LLM-generated content filling the web is what pushes people to hate them.

                                              • bil7 3 hours ago

                                                usually I expect an instructional article/blog post/etc to have actually been tried and tested. If it has the format of "I had x problem, chatGPT suggested solution y, it actually worked, here it is and a bit about why it works" then of course that's fine and good. But you'll only have to google a few times before you see a 100% synthetic SEO optimised useless article that does nothing but waste your time.

                                                • sofixa 3 hours ago

                                                  It wastes your time because it's full of cruft. And you cannot assume to trust it will solve your problem, so you're risking losing even more time.

                                                  • ziddoap 3 hours ago

                                                    While I'm not about to defend AI blog spam, what you just said is completely applicable to human-generated blogs as well.

                                                    You should not just assume something is trustworthy or will solve your problem just because it was written by a human. And I have read plenty of cruft-filled blogs/articles/etc. from humans.

                                                    • rsynnott an hour ago

                                                      > While I'm not about to defend AI blog spam, what you just said is completely applicable to human-generated blogs as well.

                                                      It kind of depends on the subject. If the blog is about politics, say, then you may assume that the human author is likely to be willing to lie. If it's about a non-contentious subject, though, then it is highly unlikely the human author will lie, and if they don't know, then, well, okay, _some_ people will write articles about stuff that they don't understand, but it's not common. Most non-psychopaths, if they don't know the answer, won't just make something up.

                                                      Whereas the magic robot will very happily spew nonsense on any subject.

                                                  • tempfile 3 hours ago

                                                    The answer is probably wrong?

                                                • andersco 3 hours ago

                                                  This seems like too much of an absolutist stance regarding AI. AI is a tool and as such has both good and bad uses. For example, maybe I have something I feel important to share with the world but I am not a very good writer. If AI can help me express my own ideas more clearly and clean up my grammar, that to me is a great use of AI. On the contrary, letting AI just churn out articles wholesale to me would be an abuse rather than good use. Correct me if I’m wrong, but I think that also is consistent with Medium’s policy.

                                                  • Eddy_Viscosity2 3 hours ago

                                                    Medium's policy does allow accounts where AI churns out articles wholesale; they just can't be monetized. And the author does make the point that he would rather read imperfect human writing than AI-assisted writing, because 'the flaws make the personality'.

                                                    • ayaen 3 hours ago

                                                      I think it's a personal stance, so the degree of absoluteness doesn't matter. It's what he prefers. We can tell him it's too harsh when he feels the pinch of cutting off all writers using LLMs. As far as using LLMs to clean up one's language is concerned, there's a difference between editing the content the model generates and learning grammar patterns and word usages from it, then applying them while writing on your own. When the latter is done, no one can tell.

                                                      • coding123 3 hours ago

                                                        Some things that may seem simple may actually be the cause of some people's demise.

                                                      • mulhoon 3 hours ago

                                                        The quality of Medium articles (and comments) has really gone down over the past few years. Lots of attention grabbing headlines “5 ways to X” “Stop doing X” and less well-written content overall. I’m not sure if most writers hopped over to Substack but it feels like a cheaper place than it used to be.

                                                        • Vegenoid 3 hours ago

                                                          There was a brief period where seeing a Medium article in a search result made me excited. Now, I avoid them, because of too many experiences with shallow, incorrect, or LLM-generated articles.

                                                        • helboi4 3 hours ago

                                                          I sort of love that he takes a strong stance here. I think even if you think there are some applications for AI, you should be able to strongly state where it is not useful. Having AI churn out bloat text in the form of terrible blogposts and misleading listicles that makes it harder for genuine information to be found online, is not a good use case for AI. If you want to ask ChatGPT to summarise a topic for you, you can literally just ask ChatGPT to do that. There is zero benefit to having a third party pumping that into websites that we go to to find real human opinions and hopefully a few genuinely great, expert articles - neither of which ChatGPT can produce.

                                                          • undefined 3 hours ago
                                                            [deleted]
                                                            • delichon 3 hours ago

                                                              >> AI assistance empowers an author to level up — to make their ideas clearer, for example, or help them express themselves in a second language

                                                              > And also, I vehemently disagree with this statement. Flaws express personality.

                                                              So for the same reasons, does he wish to not read content that has been assisted by spelling or grammar checking? Or an editor, proof reader or fact checker? Or a thesaurus or dictionary? Or is he only concerned when AI is applied to those roles?

                                                              • imranq 3 hours ago

                                                                I see where the author is coming from - like any new technology, AI is getting abused significantly: Better scams, stealing creator IP, developing thought crutches, emotional dependencies on artificial companions, vast energy usage, and the collapse of personality and knowledge across the societal mean.

                                                                Some people say that LLMs make you more productive, but guess what? The value captured by your productivity will only enrich the companies that employ you. Companies will hire fewer people and eliminate positions all for "shareholder value" and to give a small raise to a C-suite executive.

                                                                But at the same time, we could definitely envision a world where these models accelerate human creativity - smaller high performance LLMs working on the edge, responsible usage for learning and skill development, helping people build their own voice.

                                                                The key is to identify the subtle line between dependence and empowerment, and to know when one is accruing AI-debt at the expense of their own abilities.

                                                                • causal 3 hours ago

                                                                  > I am absolutely against artificial intelligence.

                                                                  I'm curious where someone like him draws the line. AI is an ambiguous term. He's an author and photographer, so perhaps AI has just come to mean LLMs and image generators?

                                                                  I might even agree with the thrust of his concerns, but this kind of diatribe always comes off a bit rage-blind, and maybe brings the strength of the discussion down a little.

                                                                  • rsynnott an hour ago

                                                                    The industry only has itself to blame for this, because, for a decade or so, anything 'AI-ish' has almost always been branded as ML (presumably due to the previous AI winter, where the term 'AI' became poisonous to VCs). If people equate AI with generative AI, it is only because, well, _so does the industry_.

                                                                • ksec 3 hours ago

                                                                  What other simple blogging platform alternative are there?

                                                                  • Telemakhos 3 hours ago

                                                                    There's Google's blogspot, but that's not trendy. There's Wordpress, but that's not trendy. There are a host of blogging tools out there, but they're not trendy.

                                                                    The question isn't what other simple platform is out there: the question is what blogging software do you want to be seen using. People don't want blogspot.com at the end of their domain names, and they don't want to look like they're using blogspot.com. They didn't mind Medium, and that's the difference. It's not a question of technology, which has existed for decades; it's a question of marketing and brands.

                                                                    • alexjplant 3 hours ago

                                                                      Several years ago static site generators were all the hotness. Around then I switched to Hugo [1] from Wordpress and it's been a good experience. I do all editing locally with the CLI then chuck it to Git to be built and hosted by Netlify.

                                                                      [1] https://gohugo.io/

                                                                      • sofixa 3 hours ago

                                                                        > You do all editing locally with the CLI then chuck it to Git to be built and hosted by Netlify.

                                                                        Or Cloudflare Pages, or GitHub Pages, or Firebase Hosting, or Fastly, or Vercel, or S3+CloudFront, or your own nginx instance.

                                                                        • alexjplant 3 hours ago

                                                                          Edited to reflect that it's my workflow specifically.

                                                                      • S0y 3 hours ago
                                                                        • mulhoon 3 hours ago
                                                                        • axegon_ 3 hours ago

                                                                          I mean... it was bound to happen. The dead internet theory became a reality, and I don't blame people for being disappointed. Twitter is full of AI bots (verified, at that, mind you), so is Reddit, and I reckon so are most social media platforms. Medium is no different in that regard: they want two things, content and engagement. AI gives them an ungodly amount of content. Crap content, no doubt about it, but currently they are still banking on quantity rather than quality, which will bite them back sooner or later: selling 10 burgers for 50 bucks each is a much better financial decision, and easier to execute, than selling 100 burgers at 5 bucks a pop. A lot of people (Medium included) don't get that. The sad part is that LLMs can only juggle concepts around by predicting the most likely next token, which would only work in a static and unchanging world. I get where the author is coming from and I feel for him. The reality is that large platforms are becoming bottomless pits of AI-generated content, and it will never pay off in any meaningful way to try to compete with a consumer-grade GPU that can spit out 20 articles an hour while you spend a week writing a single one.

                                                                          The way I see it, the only true solution(for both writer or reader) is to go back to the good old-fashioned self-hosted blogs, expose an RSS and rely on RSS readers(yes, those still exist). You will never be able to compete with the exposure that you'd get on Medium or any other platform but at the end of the day, I'd rather pay 50 bucks for a good burger than 5 bucks for a cheap bun from Lidl with some very questionable substances inside.

                                                                          • yawnxyz 3 hours ago

                                                                            But all the other non-AI slop that Medium put up over the years was fine with him?

                                                                            • AndrewKemendo 3 hours ago

                                                                              HTML-only websites that you can host yourself trivially still work

                                                                              These platforms are all the same: put your middling content here and you’ll be a star

                                                                              In my experience most people have valueless content and target an affinity-group audience. They produce enough content to have a viral hit, and that viral hit becomes their first ratchet. They then adjust their content type based on the virality and continue the ratchet from there. IIRC this is the playbook of, e.g., MrBeast, and there's a deterministic outcome: create enough attention that marketing income funds your lifestyle and drives all future ratchet iterations.

                                                                              If you have anything of value, people will figure out how to get it, but it won't be fast and it will be mostly ignored.

                                                                              • joshstrange 3 hours ago

                                                                                Why this writer ever thought Medium cared specifically about human-written stories is beyond me. Medium is a bad idea for many reasons, but AI doesn't make my list at all. The staunch anti-AI stance is also incredibly off-putting: saying things like they won't even read an article with an AI-generated header image or AI-assisted text (as if they could actually tell with certainty). AI (LLMs) is a tool, nothing more, no different than technology itself is a tool. I find all these anti-AI takes extremely tiring and repetitive.

                                                                                My eyes about rolled out of my head reading through this.

                                                                                My money is on either this writer fading into obscurity or using AI tools in the future when they realize their stance was silly.

                                                                                • stonethrowaway 3 hours ago

                                                                                  Was the author ever not-obscure?

                                                                                • josefritzishere 3 hours ago

                                                                                  I feel the same way about AI. It is virtually all downside, no upside, at least on a species level.

                                                                                  • blackeyeblitzar 3 hours ago

                                                                                    What about LinkedIn? They just added policies about how they’ll use your content for AI. But I doubt most users are in a position to leave.

                                                                                    • stonethrowaway 3 hours ago

                                                                                      Just rest and vest brah.

                                                                                      In all seriousness though, there isn’t a competitive platform out there that won’t adopt AI or any other technology if it meant making money. If they chose not to, they would be giving ground to someone who would. That’s capitalism in a nutshell.

                                                                                      You aren’t their customer. It’s the eyeballs they’re after, and any particular author’s words aren’t really precious. There are enough authors/writers out there willing to replace you.

                                                                                      Remember kids: “Content” is the word for things _between_ the ads.

                                                                                      Cheers.

                                                                                      • FrustratedMonky 3 hours ago

                                                                                        Leave and go where? Seems like a losing battle; eventually all platforms will be co-opted to feed AI. Even if you have a private website (blog), it can be scraped, and how are the little guys going to fight that?

                                                                                        • erickhill 3 hours ago

                                                                                          Can we not put lines in our robots.txt files to block being crawled?
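For what it's worth, a minimal sketch of such lines. The user-agent tokens below are the ones these crawlers document publicly (OpenAI's GPTBot, Common Crawl's CCBot, and Google's AI-training opt-out token); compliance is entirely voluntary on the crawler's side:

```text
# robots.txt: ask AI training crawlers to stay away, allow everything else
User-agent: GPTBot            # OpenAI's training crawler
Disallow: /

User-agent: CCBot             # Common Crawl
Disallow: /

User-agent: Google-Extended   # Google's AI-training opt-out token
Disallow: /

User-agent: *
Allow: /
```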

                                                                                          • cjf101 3 hours ago

                                                                                            There was a bunch of reporting on how AI companies and researchers were using tools that ignored robots.txt. It's a "polite request" that these companies had a strong incentive to ignore, so they did. That incentive is still there, so it is likely that some of them will continue to do so.

                                                                                            • Ukv 3 hours ago

                                                                                              CommonCrawl[0] and the companies training models I'm aware of[1][2][3] all respect robots.txt for their crawling.

                                                                                              If we're thinking of the same reporting, it was based on a claim by TollBit (a content licensing startup), which was in turn based on the fact that "Perplexity had a feature where a user could prompt a specific URL within the answer engine to summarize it". Actions performed by tools acting as a user agent (like archive.today, a webpage-to-PDF site, or a translation site) aren't crawlers and aren't what robots.txt is designed for; either way, the feature is disabled now.

                                                                                              [0]: https://commoncrawl.org/faq

                                                                                              [1]: https://platform.openai.com/docs/bots

                                                                                              [2]: https://support.anthropic.com/en/articles/8896518-does-anthr...

                                                                                              [3]: https://blog.google/technology/ai/an-update-on-web-publisher...
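As a quick way to see what a compliant crawler would conclude from a given robots.txt, Python's standard library ships a parser. A small sketch; the rules and URL here are illustrative assumptions, not any particular site's actual policy:

```python
import urllib.robotparser

# Parse a robots.txt the way a well-behaved crawler would.
rp = urllib.robotparser.RobotFileParser()
rp.parse([
    "User-agent: GPTBot",
    "Disallow: /",
    "",
    "User-agent: *",
    "Allow: /",
])

# The named crawler falls under its specific Disallow rule...
print(rp.can_fetch("GPTBot", "https://example.com/post"))        # False
# ...while everyone else falls under the wildcard Allow rule.
print(rp.can_fetch("SomeOtherBot", "https://example.com/post"))  # True
```

The point being: `can_fetch` only tells you what a crawler *should* do; nothing in the protocol enforces it.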

                                                                                              • FrustratedMonky 2 hours ago

                                                                                                Robots.txt is a suggestion, and so is a company's claim that it respects it.

                                                                                                The companies that are ignoring robots.txt are also probably the companies not advertising that they are ignoring robots.txt.

                                                                                                • Ukv an hour ago

                                                                                                  The EU's AI Act points to the DSM directive's text-and-data-mining exemption, which allows commercial data mining so long as machine-readable opt-outs are respected; robots.txt is typically taken as the established standard for this.

                                                                                                  In the US it is a suggestion (so long as Fair Use holds up) but all I've seen suggests that the major players are respecting it, and minor players tend to just use CommonCrawl which also does. Definitely possible that some slip through the cracks, but I don't think it's as useless as is being suggested.

                                                                                            • CrazyStat 3 hours ago

                                                                                              AI companies are ignoring robots.txt in the race to slurp up the entire internet [1].

                                                                                              [1] https://www.reuters.com/technology/artificial-intelligence/m...

                                                                                              • undefined 3 hours ago
                                                                                                [deleted]
                                                                                              • wkat4242 3 hours ago

                                                                                                Yeah and even better: share blocklists of known AI crawler IPs so we can just block them. Robots.txt is too voluntary.
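A hedged sketch of what server-side blocking might look like, using nginx and matching on self-declared User-Agent strings (the crawler names are only examples; an IP blocklist works similarly, and a crawler that lies about its UA slips through either way):

```nginx
# http-level: flag requests whose User-Agent matches known AI crawlers
map $http_user_agent $ai_crawler {
    default        0;
    "~*GPTBot"     1;
    "~*CCBot"      1;
    "~*ClaudeBot"  1;
}

server {
    listen 80;
    server_name example.com;

    # refuse flagged crawlers outright
    if ($ai_crawler) {
        return 403;
    }
}
```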

                                                                                                • rvbissell 3 hours ago

                                                                                                  Someone who doesn't care about polluting our corpus is not going to care about your robots.txt

                                                                                              • TekMol 3 hours ago

                                                                                                Ice delivery guy goes on strike. Demands ban of fridges.

                                                                                                • ayaen 3 hours ago

                                                                                                  I am sorry, the analogy is completely lost on me. Care to expand? Thanks.

                                                                                                  • timkq 3 hours ago

                                                                                                    I'll add another analogy of my own: "guy harvests his own ice, is mad at others for buying it from the store"

                                                                                                    He is mad at others using AI on his favorite blogging website. I think the comparison is an insult to himself, because most AI-produced articles are unreadable and full of useless filler information; the guy can clearly do better.

                                                                                                    • TekMol 3 hours ago

                                                                                                      Here is how AI explains it:

                                                                                                      ...

                                                                                                      TekMol's comment is using a historical analogy to make a point about the current situation with AI. Let's break it down:

                                                                                                      The Analogy Explained

                                                                                                          Ice delivery guy = Workers in traditional industries
                                                                                                          Going on strike = Protesting against technological change
                                                                                                          Demanding a ban on fridges = Calling for a ban on AI
                                                                                                      
                                                                                                      Historical Context

                                                                                                      Before refrigerators became common household appliances, there was an industry of ice delivery. Workers would cut ice from frozen lakes or produce it in ice houses, then deliver it to homes and businesses for food preservation.

                                                                                                      The Point of the Analogy

                                                                                                      TekMol is suggesting that demanding a ban on AI is similar to how ice delivery workers might have reacted to the invention of refrigerators. Just as refrigerators made ice delivery obsolete, AI is perceived as a threat to certain jobs or industries.

                                                                                                      The Implied Message

                                                                                                      The comment implies that trying to ban AI (or expressing hate for it) is:

                                                                                                          Futile - like trying to stop technological progress
                                                                                                          Short-sighted - not recognizing the potential benefits of the new technology
                                                                                                          Self-defeating - potentially harming oneself by resisting inevitable change
                                                                                                      
                                                                                                      In essence, the analogy is pointing out that technological progress often disrupts existing industries, but attempting to halt this progress is generally ineffective and may be counterproductive.

                                                                                                      ...

                                                                                                      This was my prompt:

                                                                                                      TekMol writes a comment about an article where the author expresses his hate for AI: "Ice delivery guy goes on strike. Demands ban of fridges."

                                                                                                      ayaen asks: "I am sorry, the analogy is completely lost on me. Care to expand? Thanks."

                                                                                                      Can you explain the analogy to ayaen?

                                                                                                      • poszlem 3 hours ago

                                                                                                        The analogy compares a writer complaining about generative AI to an ice delivery person demanding a ban on refrigerators. Here's the breakdown:

                                                                                                            Ice delivery guy = Traditional writer
                                                                                                            Going on strike = Complaining or protesting
                                                                                                            Demanding ban of fridges = Calling for restrictions on generative AI

                                                                                                        The analogy suggests that the writer's complaint about generative AI is as futile and outdated as an ice delivery person trying to ban refrigerators. Just as refrigerators made ice delivery largely obsolete, the implication is that generative AI might be seen as a technological advancement that could potentially impact traditional writing roles.

                                                                                                      • happytoexplain 3 hours ago

                                                                                                        Why be so snide and dismissive regarding people's livelihoods?

                                                                                                        • TekMol 3 hours ago

                                                                                                          What I tried to express is how we will look back on this in the future. Demanding a ban of AI will look just as insane as it would be to demand a ban of the wheel, metal, electricity, machines, computers or the internet.

                                                                                                          • poszlem 3 hours ago

                                                                                                            We have been through this exact scenario hundreds of times with different technologies. As usual, someone's loss is another's gain. For example, people who have ideas to express but are not native English speakers, or those who struggle with expressing themselves in writing, will absolutely benefit from this technology.