• padolsey a day ago

    I'm surprised to see so little coverage of AI legislation news here tbh. Maybe there's an apathy and exhaustion to it. But if you're developing AI stuff, you need to keep on top of this. This is a pretty pivotal moment. NY has been busy with RAISE (frontier AI safety protocols, audits, incident reporting), S8420A (must disclose AI-generated performers in ads), GBL Article 47 (crisis detection & disclaimers for AI chatbots), S7676B (protects performers from unauthorized AI likenesses), NYC LL144 (bias audits for AI hiring tools), SAFE for Kids Act [pending] (restricts algorithmic feeds for minors). At least three of those are relevant even if your app only _serves_ people in NY. It doesn't matter where you're based. That's just one US state's laws on AI.

    It's kinda funny, the oft-held animosity towards the EU's heavy-handed regulations, when navigating US state law is a complete minefield of its own.

    • raincole a day ago

      > I'm surprised to see so little coverage of AI legislation news here tbh.

      Because no one believes these laws or bills or acts or whatever will be enforced.

      But I actually believe they'll be. In the worst way possible: honest players will be punished disproportionally.

      • padolsey a day ago

        > Because no one believes these laws or bills or acts or whatever will be enforced.

        Time will tell. Texas sat on its biometric data act quite quietly, then hammered Meta with a $1.4B settlement 20 years after the bill's enactment. Once these laws are enacted, they lie quietly until someone has a big enough bone to pick with someone else. There are already many traumatic events occurring downstream of slapdash AI development.

        • Ajedi32 20 hours ago

          That's even worse, because then it's not really a law, it's a license for political persecution of anyone disfavored by whoever happens to be in power.

          • dylan604 19 hours ago

            Never mind the damage that was willfully allowed to happen, the very damage the bill was supposed to prevent.

          • vulcan01 20 hours ago

            Meta made $60B in Q4 2025. A one-time $1.4B fine, 20 years after enactment, is not "getting hammered".

            • Retric 17 hours ago

              They didn't make $60B in Q4 2025 in Texas. $1.4B was 100% of their profit from Texas for years; that's a big fine.

              • saalweachter 16 hours ago

                You also have to ask "how much is the specific thing in the lawsuit worth to Meta?"

                I don't know how much automatically opting everyone in to automatic photo tagging made Meta, but I assume it's "less than 100% of their revenue".

                Barring the point of contention being integral to the business's revenue model, or management of the company being infected with oppositional defiant disorder, a lawsuit is just an opportunity for some middle manager + team to get praised for making a revenue-negative change that reduces the risk of future fines.

                Work like that is a gold mine; several people will probably get promoted for it.

                • ninalanyon 16 hours ago

                  Big for Texas, not for Meta.

                  • Retric 14 hours ago

                    It’s under 5 hours of GDP for Texas. It’s a big fine, but not a huge deal for either party.

                    • abustamam 9 hours ago

                      So what's the point? If neither party is really affected by a penalty (no discernible benefit or loss to either), then is it all just performative?

                      Maybe I just answered my own question.

                      • Retric 8 hours ago

                        Things don’t need to be huge deals to influence behavior or be a net gain.

                        I bet you’ve taken a shortcut to save less than 1h for example.

                        • abustamam 4 hours ago

                          I think time is different because it's finite. I admit I'll still opt for the store brand to save a few bucks even while making an engineering salary. But I'll also do something "illegal" (like parking at a metered spot without paying) to save time or otherwise do what I want, and just deal with whatever financial cost is incurred if I know it won't break me.

                          A saying I've heard is that if the punishment for a crime is financial, then it is only a deterrent for those who lack the means to pay. Small business gets caught doing bad stuff, a $30k fine could mean shutting down. Meta gets caught doing bad stuff, a billion dollar fine is almost a rounding error in their operational expenses.

              • OGEnthusiast 19 hours ago

                > Texas sat on its biometric data act quite quietly, then hammered Meta with a $1.4B settlement 20 years after the bill's enactment.

                Sounds like ignoring it worked fine for them then.

                • jandrese 18 hours ago

                  That sounds like it will be in the courts for ages before Facebook wins on selective prosecution.

                • SAI_Peregrinus 19 hours ago

                  Or it'll end up like California cancer warnings: every news site will put the warning on, just in case, making it worthless.

                  • Wistar 18 hours ago

                    … or the sesame seed labeling law that resulted in sesame seeds being added to everything.

                    https://apnews.com/article/sesame-allergies-label-b28f8eb3dc...

                    • sebastiennight 14 hours ago

                      Wow, it's always amazing to me how the law of unintended consequences (with capitalistic incentives acting as the Monkey's Paw) strikes every time some well-intentioned new law gets passed.

                      • nemo 15 hours ago

                        As someone who is allergic to sesame, that is insanely annoying.

                      • clickety_clack 16 hours ago

                        There just can’t be a way to discriminate on the spectrum from “we use AI to tidy up the spelling and grammar” to “we just asked ChatGPT to write a story on x”, so the disclaimer will make it look like everyone just asked ChatGPT.

                        • slg 14 hours ago

                          >There just can’t be a way to discriminate on the spectrum from “we use AI to tidy up the spelling and grammar” to “we just asked ChatGPT to write a story on x”

                          Why though? Whether the AI played the role of an editor or the role of a reporter seems like a clear distinction to me, and likely to anyone else familiar enough with how journalism works.

                          • clickety_clack 8 hours ago

                            People know what it _should_ mean, but if you say that it's fine to have an AI editor, then there will be a bunch of people saying something like "my reporting is that x is a story, and my editor, ChatGPT, just tidied that idea up into a full story". There are all sorts of hoops people can jump through like that. So you end up putting a banner on all AI, or only penalizing the honest people who follow the distinction that's supposed to exist.

                            • slg 3 hours ago

                              Fair enough, but my main response to that is that people need to support independent journalism. It's entirely possible I'm paying some fraud(s), but as someone who certainly spends more than the average person on online journalism, I trust the people I support at the very least know that putting their byline on an AI written article would be a career destroying scandal in the eyes of their current audience.

                        • sodapopcan 15 hours ago

                          I just came across this for the first time. I ordered a precision screwdriver kit and it came with a cancer warning on it. I was really taken aback, and then I learned about this.

                          • mrandish 13 hours ago

                            Some legislation which sounds good in concept and is well-intended ends up having little to no positive impact in practice. But it still leaves businesses with ongoing compliance costs/risks, taxpayers footing the bill for an enforcement bureaucracy forever, and consumers with either annoying warning interruptions or yet more "warning message noise".

                            It's odd that legislators seem largely incapable of learning from the rich history of past legislative mistakes. Regulation needs to be narrowly targeted, clearly defined, and have someone smart actually think through how the real world will implement compliance, as well as identify likely unintended consequences and perverse incentives. Another net improvement would be for any new regs passed to have an automatic sunset provision where they need to be renewed a few years later, under a process which makes it easy to revise or relax certain provisions.

                            • thesmtsolver2 7 hours ago

                              It makes sense once you understand that lawmakers generally care about their careers more than the state/country/citizens.

                              Most of it is performative law making.

                          • vablings 15 hours ago

                            Known by the state of cancer to cause California. Jokes aside, I do think P65 warnings are pretty useful for the most part.

                            • 8cvor6j844qw_d6 15 hours ago

                              Essentially useless if everyone slaps on that label. Kinda like hospital alarm fatigue.

                              But this is just my uninformed opinion; perhaps those who work in the health industry think differently.

                              • datsci_est_2015 14 hours ago

                                Maybe it's not a fair comparison, but I think it's been shown that tobacco warnings are effective even though they're so common as to be "fatigued".

                                • DrinkingRedStar 14 hours ago

                                  I do believe this is an unfair comparison. With tobacco the warnings are always true, but with Prop 65 the product might not contain any cancer-causing ingredients; the warning is there just in case.

                                  It's much easier to tell yourself Prop 65 doesn't have to be avoided because "it's probably just there to cover their asses", while tobacco products have real warnings that definitely mean danger (though there are people who convince themselves otherwise).

                              • bigstrat2003 5 hours ago

                                I don't know of anyone (seriously not one person) who actually believes those labels. And the reason why is precisely because the government was foolish enough to put them on everything under the sun. Now nobody listens to them because the seriousness got diluted.

                              • _blk 15 hours ago

                                Yup. Or like "necessary cookies" that aren't all that necessary when it works just fine without them.

                                • charcircuit 12 hours ago

                                  Just because you don't notice that it's not working properly, that doesn't mean you haven't broken anything.

                                  • subscribed 13 hours ago

                                    Well, they're necessary if you're spying on your visitors.

                                • Galanwe a day ago

                                  How about a pop-up on websites, next to the tracking cookie ones, to consent reading AI generated text?

                                  I see a bright future for the internet

                                  • raw_anon_1111 15 hours ago

                                    Don’t give the EU any ideas

                                  • cheschire a day ago

                                    Yeah, it's like that episode of Schoolhouse Rock about how a bill becomes a law, except now it takes place in Squid Game.

                                    • razingeden 12 hours ago

                                      >But I wonder who that sad little scrap of 8,523 pieces of paper is?

                                    • AbstractH24 7 hours ago

                                      > Because no one believes these laws or bills or acts or whatever will be enforced.

                                      This

                                      I still regularly see job postings with no salary here in NYC. Never heard of any enforcement.

                                      • tedggh 17 hours ago

                                        Probably worse than that. I can totally see it being weaponized: a media company critical of a particular group or individual being scrutinized and fined. I haven't looked at any of these laws, but I bet their language gives plenty of room for interpretation and enforcement, perhaps even if you are not generating any content with AI.

                                        • crimsonsupe a day ago

                                          > Because no one believes these laws or bills or acts or whatever will be enforced.

                                          That’s because they can’t be.

                                          People assume they’ve already figured out how AI behaves and that they can just mandate specific "proper" ways to use it.

                                          The reality is that AI companies and users are going to keep refining these tools until they're indistinguishable from human work whenever they want them to be.

                                          Even if the models still make mistakes, the idea that you can just ban AI from certain settings is a fantasy because there’s no technical way to actually guarantee enforcement.

                                          You’re essentially passing laws that only apply to people who volunteer to follow them, because once someone decides to hide their AI use, you won't be able to prove it anyway.

                                          • chrisjj 21 hours ago

                                            > the idea that you can just ban AI from certain settings is a fantasy because there’s no technical way to actually guarantee enforcement.

                                            By that token bans on illegal drugs are fantasy. Whereas in fact, enforcement doesn't need to be guaranteed to be effective.

                                            There may be few technical means to distinguish at the moment. But could that have something to do with lack of motivation? Let's see how many "AI" $$$ suddenly become available to this once this law provides the incentive.

                                            • amanaplanacanal 20 hours ago

                                              > By that token bans on illegal drugs are fantasy.

                                              I think you have this exactly right. They are mostly enforced against the poor and political enemies.

                                              • raw_anon_1111 15 hours ago

                                                Well considering how ineffective the War on Drugs has been - is that really a great analogy?

                                                • chrisjj 14 hours ago

                                                  > considering how ineffective the War on Drugs has been

                                                  Relative to no war on drugs? Who knows.

                                                  • raw_anon_1111 14 hours ago

                                                    Has there ever been a single person who wants an illegal drug that couldn’t get one because it was illegal?

                                                    A quick Google search estimates that less than 3% of drugs are intercepted by the government.

                                                    • subscribed 13 hours ago

                                                      Me. There are four I want. All very safe.

                                                      I've always wanted to try two specific ones, but the first cannot be had in the safest form because of the specific precursor ban, and all of them suffer from an insane (to me) risk of adulteration.

                                                      In twenty minutes I could probably find 10 "reputable" shops/markets, but still with 0 guarantee I won't get the specific thing laced with something for strength.

                                                      Even if I wanted pot (I don't; I found it repetitive and extremely boring, except for one experience), I would have to grow it myself (stench!), but then... where would I find sane seeds (a healthy CBD-to-THC ratio)?

                                                      Similarly, I wouldn't buy the moonshine from someone risking prosecution to make and sell it. It's guaranteed this risk is offset.

                                                      So... I can't get what I want because there's an extremely high chance of getting hurt. An example being poisoning by pills sold as MDMA: at every music festival, multiple people are hurt. Not by molly, by additives.

                                                      • raw_anon_1111 13 hours ago

                                                        I’m absolutely positive that someone in your 1st degree or 2nd degree social circle can get you weed if you wanted it.

                                              • 6LLvveMx2koXfwn a day ago

                                                > You’re essentially passing laws that only apply to people who volunteer to follow them . .

                                                Like every law passed forever (not quite but you get the picture!) [1]

                                                1. https://en.wikipedia.org/wiki/Consent_of_the_governed

                                                • rconti 17 hours ago

                                                  Sure they can be enforced. Your comment seems to be based on the idea of detecting AI writing from the output. But you can enforce this law based on the way content is created. The same way you can enforce food safety laws from conditions of the kitchen, not the taste of the food. Child labor laws can be enforced. And so on.

                                                  Unless you're trying to tell me that writers won't report on their business that's trying to replace them with AI.

                                                  • songodongo a day ago

                                                    And you can easily prompt your way out of the typical LLM style. “Written in the style of Cormac McCarthy’s The Road”

                                                    • capnrefsmmat a day ago

                                                      No, that doesn't really work so well. A lot of the LLM style hallmarks are still present when you ask them to write in another style, so a good quantitative linguist can find them: https://hdsr.mitpress.mit.edu/pub/pyo0xs3k/release/2

                                                      That was with GPT4, but my own work with other LLMs shows they have very distinctive styles even if you specifically prompt them with a chunk of human text to imitate. I think instruction-tuning with tasks like summarization predisposes them to certain grammatical structures, so their output is always more information-dense and formal than humans'.
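To illustrate the kind of signal a stylometric analysis keys on, here is a toy comparison of function-word frequencies between two texts (the word list, sample texts, and similarity measure are arbitrary choices for this sketch, not anything from the linked paper):

```python
from collections import Counter
import math

# A handful of common function words; real stylometry uses hundreds of
# features (function words, syntax, sentence-length distributions, etc.).
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "for"]

def feature_vector(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine_similarity(u, v):
    """Cosine of the angle between feature vectors (1.0 = identical profile)."""
    dot = sum(x * y for x, y in zip(u, v))
    norm_u = math.sqrt(sum(x * x for x in u))
    norm_v = math.sqrt(sum(y * y for y in v))
    if norm_u == 0 or norm_v == 0:
        return 0.0
    return dot / (norm_u * norm_v)

a = "the cat sat on the mat and the dog slept in the sun"
b = "a storm of wind and rain swept over the hills to the sea"
print(round(cosine_similarity(feature_vector(a), feature_vector(b)), 3))
```

The point is that these distributional fingerprints survive a "write like Cormac McCarthy" prompt far better than surface vocabulary does.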

                                                      • Der_Einzige 14 hours ago

                                                        This still doesn't remove all the slop. You need sampler or fine-tuning tricks for it. https://arxiv.org/abs/2510.15061

                                                      • wwfn a day ago

                                                        > passing laws that only apply to people who volunteer to follow them

                                                        That's a concerning lens through which to view regulations. Obviously true, but true of all laws. Regulations don't apply only to immediately observable offenses.

                                                        There are lots of bad actors and instances where the law is ignored because getting caught isn't likely. Those are conspiracies! They get harder to maintain the more people are involved, which is the reason for whistle-blower protections.

                                                        VW's Dieselgate[1] comes to mind albeit via measurable discrepancy. Maybe Enron or WorldCom (via Cynthia Cooper) [2] is a better example.

                                                        [1]: https://en.wikipedia.org/wiki/Volkswagen_emissions_scandal [2]: https://en.wikipedia.org/wiki/MCI_Inc.#Accounting_scandals

                                                        • hsuduebc2 10 hours ago

                                                          But most regulations are, and can be, enforced because the perpetrator can simply be caught. That’s the difference. This is not enforceable in any meaningful way. The only way it could change anything would be through whistleblowers, for example someone inside a major outlet like the New York Times reporting to authorities that AI was being used. On the contrary, if you systematically create laws that are, by their nature, impossible to enforce, you weaken trust in the law itself by turning it into something that exists more on paper than in reality.

                                                        • delaminator a day ago

                                                          C2PA-enabled cameras (Sony Alpha range, Leica, and the Google Pixel 10) sign the digital images they record.

                                                          So legislators, should they so choose, could demand source material be recorded on C2PA enabled cameras and produce the original recordings on demand.
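Roughly the shape of the idea, sketched with a stdlib HMAC standing in for the asymmetric device-key signature a real C2PA camera embeds (C2PA proper signs a manifest of the capture and any edits with a per-device key, so verifiers never need a shared secret; this toy version only shows that tampering breaks verification):

```python
import hashlib
import hmac

# Stand-in for a key held in the camera's secure element. Real C2PA
# uses public-key signatures rather than a shared secret like this.
CAMERA_KEY = b"device-secret-key"

def sign_image(image_bytes):
    """Camera-side: produce a signature over the raw image bytes."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes, signature):
    """Verifier-side: recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_image(image_bytes), signature)

original = b"raw sensor data from the camera"
sig = sign_image(original)
print(verify_image(original, sig))            # unmodified image verifies
print(verify_image(original + b"edit", sig))  # any alteration fails
```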

                                                          • Forgeties79 20 hours ago

                                                            The idea that you can just ban drinking and driving is a fantasy because there’s no technical way to actually guarantee enforcement.

                                                            I know that sounds ridiculous but it kind of illustrates the problem with your logic. We don’t just write laws that are guaranteed to have 100% compliance and/or 100% successful enforcement. If that were the case, we’d have way fewer laws and little need for courts/a broader judicial system.

                                                            The goal is getting most AI companies to comply and making sure that most of those that don’t follow the law face sufficient punishment to discourage them (and others). Additionally, you use that opportunity to undo what damage you can, be it restitution or otherwise for those negatively impacted.

                                                            • conartist6 a day ago

                                                              Indistinguishable, no. Not these tools.

                                                              Without emotion, without love and hate and fear and struggle, only a pale imitation of the human voice is or will be possible.

                                                            • just_once a day ago

                                                              What does that look like? Can you describe your worst case scenario?

                                                              • jandrese 18 hours ago

                                                                Highly selective enforcement along partisan lines to suppress dissent. Government officials forcing you to prove that your post is not AI generated if they don't like it. Those same officials claiming that it is AI generated regardless of the facts on the ground to have it removed and you arrested.

                                                                • idle_zealot 17 hours ago

                                                                  If you assume the use of law will be that capricious in general, then any law at all would be considered too dangerous for fear of use as a partisan tool.

                                                                  Why accuse your enemies of using AI-generated content in posts? Just call them domestic terrorists for violently misleading the public via the content of their posts and send the FBI or DHS after them. A new law or lack thereof changes nothing.

                                                                • amelius a day ago

                                                                  Worst case? Armed officers entering your home without warrant, taking away your GPU card?

                                                                  • just_once a day ago

                                                                    They can do that anyway. What does that have to do with the content of the proposed law?

                                                                • mmooss 16 hours ago

                                                                  The primary obstacle is discussions like this one. It will be enforced if people insist it's enforced; the power comes from the voters. If a large portion of the population, especially the informed population, represented to some extent here on HN, thinks it's hopeless, then it will be. If they believe they will get together to make it succeed, it will. It's that simple: whatever people believe is the number one determinant of the outcome. Why do you think so many invest so much in manipulating public opinion?

                                                                  Many people here love SV hackers who have done the impossible, like Musk. Could you imagine this conversation at an early SpaceX planning meeting? That was a much harder task, requiring inventing new technology and enormous sums of money.

                                                                  Lots of regulations are enforced and effective. Your food, drugs, highways, airplane flights, etc. are all pretty safe. Voters compelling their representatives is commonplace.

                                                                  It's right out of psyops to get people to despair - look at messages used by militaries targeted at opposing troops. If those opposing this bill created propaganda, it would look like the comments in this thread.

                                                                  • cucumber3732842 19 hours ago

                                                                    >But I actually believe they'll be. In the worst way possible: honest players will be punished disproportionally.

                                                                    As with everything else, BigCo with their legal team will explain to the enforcers why their "right up to the line, if not over it" solution is compliant, and MediumCo and SmallCo will be the ones getting fined, or forced to waste money staying far from the line, or paying a 3rd party to do what BigCo's legal team does, at cost.

                                                                    • sumeno 21 hours ago

                                                                      Who are the honest players generating AI slop articles?

                                                                      • chrisjj 21 hours ago

                                                                        The ones honestly labelling their articles e.g. "AI can make mistakes". Full marks to Google web search for leading the way!

                                                                    • tencentshill 16 hours ago

                                                                      I'll bet AI is going to be simply outlawed for hiring, and possibly algorithmic hiring practices altogether. You can't audit a non-deterministic system unless you train the AI from scratch, which is an expense only the wealthiest companies can take on.
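For context on what the mandated audits actually compute: NYC LL144's bias audits are outcome-based, comparing selection rates across demographic groups as an "impact ratio", rather than inspecting the model's internals. A toy version of that calculation, with invented numbers:

```python
def impact_ratios(selected, total):
    """Each group's selection rate divided by the highest group's rate.
    The numbers below are invented; LL144's exact methodology is set by
    NYC's implementing rules. Note that model non-determinism doesn't
    block this kind of audit, since only observed outcomes are measured."""
    rates = {g: selected[g] / total[g] for g in total}
    top = max(rates.values())
    return {g: rates[g] / top for g in rates}

# Hypothetical screening outcomes from an AI hiring tool.
selected = {"group_a": 40, "group_b": 18}
total = {"group_a": 100, "group_b": 60}
print(impact_ratios(selected, total))
```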

                                                                      • mbreese 19 hours ago

                                                                        None of those bills/laws involve legislating publishing, though. This bill would require a disclaimer on something published. That's a freedom of speech issue, so it's going to be tougher to enforce and to keep from getting overturned in the courts. The question here is what limits the government can place on what a company publishes, regardless of how the content is generated.

                                                                        IMO, it's a much tougher problem (legally) than protecting actors from AI infringement on their likeness. AI services are easier to regulate; published AI-generated content, much more difficult.

                                                                        The article also mentions efforts by news unions and guilds. This might be a more effective mechanism. If a person/union/guild required members to add a tagline to their content/articles, this would have a similar effect - showing what is and what is not AI content without restricting speech.

                                                                        • tempodox 19 hours ago

                                                                          > This bill would require a disclaimer on something published. That’s a freedom of speech issue

                                                                          They can publish all they want, they just have to label it clearly. I don’t see how that is a free speech issue.

                                                                          • mothballed 19 hours ago

                                                                            Because compelled speech is an insult to free speech just as censored speech is.

                                                                            • fwip 15 hours ago

                                                                              How do you feel about the fact that manufacturers need to list the ingredients of the food they sell you?

                                                                              • mothballed 13 hours ago

                                                                                Not thrilled about it, and I personally would rather see them repealed. I will concede compelled speech impositions have been interpreted more generously when they are commercial. I don't necessarily agree with it, but even if we concede they can happen, I hope that distinction is made for commercial vs non-commercial content. Though I'm not thrilled with it happening for either.

                                                                          • HanShotFirst 18 hours ago

                                                                            Is AI-generated text speech?

                                                                            • frumplestlatz 17 hours ago

                                                                              It is when a human publishes it. Which is why they're also liable for it.

                                                                              • _blk 15 hours ago

                                                                                I agree in general, and that should be the position, but it's probably more nuanced than this in practice: who published it when it's a dev who writes a script that just spits junk into the wild or reinforces someone else's troll-speech?

                                                                                • mbreese 14 hours ago

                                                                                  In general, I think LLM content has been found not to be copyrightable, but it would still be speech when it's published. It would be the speech of the company publishing it, not the dev that wrote the script. So, ai-junk-news.com is still publishing some kind of speech, even if it was an LLM that wrote it. At least, that would be my interpretation.

                                                                          • toofy 11 hours ago

                                                                            > SAFE for Kids Act [pending] (restricts algorithmic feeds for minors).

                                                                            i personally would love to see something like this but changed a little:

                                                                            for every user (not just minors), require a toggle: an upfront, not buried, always-in-your-face toggle to turn off algorithmic feeds, where you’ll only see posts from people you follow, in the order in which they post. again, no dark patterns: once a user toggles to a non-algorithmic feed, it should stick.

                                                                            this would do a lot to restore trust. i don’t really use the big social medias much anymore, but when i did, i can’t tell you how many posts i missed because the algorithms are kinda dumb af. like i missed friends’ anniversary celebrations, events that were right up my alley, community projects, etc… because the algorithms didn’t think the posts announcing them would be addictive enough for me.

                                                                            no need to force it “for the kids” when they can just give everyone the choice.

                                                                            • Balinares 21 hours ago

                                                                              Don't ding the amusingly scoped animosity, it's very convenient: we get to say stuff like "Sure, our laws may keep us at the mercy of big corps unlike these other people, BUT..." and have a ready rationalization for why our side is actually still superior when you look at it. Imagine what would happen if the populace figured it's getting collectively shafted in a way others may not.

                                                                              • rubyfan 21 hours ago

                                                                                >Imagine what would happen if the populace figured it's getting collectively shafted in a way others may not.

                                                                                They already believe that and it’s used to keep us fighting each other.

                                                                              • totetsu 21 hours ago

                                                                                AI View from Simmons+Simmons is a very good newsletter on the topic of AI regulation https://www.simmons-simmons.com/en/publications/clptn86e8002...

                                                                                • venkat223 21 hours ago

                                                                                  All video and other content should have an AI stamp, as most of YouTube is AI-generated. Almost like memes.

                                                                                  • snickerbockers 12 hours ago

                                                                                    I honestly just don't see any point in these laws, because they're all predicated on the people who own the AIs acting in good faith. In a way I actually think they're a net negative, because they seem to give the false impression that these problems have an obvious solution.

                                                                                    One of the most persistent, and also the dumbest, opinions I keep seeing both among laymen and people who really ought to know better is that we can solve the deepfake problem by mandating digital watermarks on generated content.

                                                                                    • vasco a day ago

                                                                                      ~Everything will use AI at some point. This is like requiring a disclaimer for using Javascript back when it was introduced. It's unfortunate but I think ultimately a losing battle.

                                                                                      Plus, if you want to mandate it, hidden markers (steganography) emitted directly by the model to verify which model generated the text, so people can independently check whether articles were written by humans, are probably the only feasible way. But it's not like humans are impartial when writing news anyway, so I don't even see the point of that.
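                                                                                      For context, published text-watermarking proposals work roughly like this: at each step, the previous token seeds a pseudorandom "green list" covering part of the vocabulary, generation is biased toward green tokens, and a detector later counts how many tokens fall in their predecessor's green list. A toy sketch (hypothetical names, heavily simplified; real schemes bias model logits during sampling rather than inspecting finished text):

```python
import hashlib

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    # Deterministically pick a "green" subset of the vocabulary,
    # seeded by the previous token. Both generator and detector
    # can recompute this without sharing any secret state.
    scored = sorted(vocab, key=lambda t: hashlib.sha256((prev_token + t).encode()).hexdigest())
    return set(scored[: int(len(scored) * fraction)])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    # Fraction of tokens that land in their predecessor's green list.
    # Unwatermarked text hovers near the baseline fraction (0.5 here);
    # watermarked text scores well above it.
    hits = sum(1 for prev, cur in zip(tokens, tokens[1:]) if cur in green_list(prev, vocab))
    return hits / max(1, len(tokens) - 1)
```

                                                                                      The statistical nature of the detector is also its weakness: paraphrasing or light editing pushes the green fraction back toward baseline, which is part of why mandated watermarks only constrain good-faith operators.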

                                                                                      • layer8 a day ago

                                                                                        It would make sense to have a more general law about accountability for the contents of news. If news is significantly misleading or plagiarizing, it shouldn’t matter if it is due to the use of AI or not, the human editorship should be liable in either case.

                                                                                        This is a concept at least in some EU countries, that there has to always be one person responsible in terms of press law for what is being published.

                                                                                        • terminalshort 18 hours ago

                                                                                          That's government censorship, and it's not allowed here, unlike in the EU. As for plagiarism, every single major news outlet is guilty of it in basically every single article. Have you ever seen the NYT cite a source?

                                                                                          • layer8 11 hours ago

                                                                                            You’re still allowed to say virtually anything you want if you make it clear that it’s an opinion and not news reporting.

                                                                                            Not citing sources doesn’t imply plagiarism, as long as you don’t misrepresent someone else’s research as your own (such as in an academic paper). Giving an account of news that you heard elsewhere in your own words isn’t plagiarism. The hurdles for plagiarism are generally relatively high.

                                                                                          • RobotToaster a day ago

                                                                                            That would bankrupt every news organisation in the USA.

                                                                                            • _blk 15 hours ago

                                                                                              Seems like a good idea then

                                                                                            • mothballed a day ago

                                                                                              If a news person in the USA publishes something that's actually criminal, the corporate veil can be pierced. If the editor printed CSAM they would be in prison lickety-split. Unless they have close connections to the executive.

                                                                                              Most regulations around disclaimers in the USA are just civil and the corporate veil won't be pierced.

                                                                                              • vasco a day ago

                                                                                                I agree with that most of all. That's why I added the bit about humans. In the end, if what you're writing isn't sourced properly or is too biased, it shouldn't matter whether AI is involved or not. With news, the truth is the thing that matters most.

                                                                                            • jMyles a day ago

                                                                                              > I'm surprised to see so little coverage of AI legislation news here tbh.

                                                                                              I think the reason is that most people don't believe, at least on sufficiently long time scales, that legacy states are likely to be able to shape AI (or for that matter, the internet). The legitimacy of the US state appears to be in a sort of free-fall, for example.

                                                                                              It takes a long time to fully (or even mostly) understand the various machinations of legislative action (let alone executive discretion, and then judicial interpretation), and in that time, regardless of what happens in various capitol buildings, the tests pass and the code runs - for better and for worse.

                                                                                              And even amidst a diversity of views/assessments of the future of the state, there seems to be near consensus regarding the underlying impetus: obviously humans and AI are distinct, and hearing the news from a human, particularly a human with a strong web-of-trust connection in your local society, is massively more credible. What's not clear is whether states have a role to play in lending clarity to the situation, or whether that will happen of the internet's own accord.

                                                                                            • jfengel 19 hours ago

                                                                                              What I'd really like to see is a label on original reporting.

                                                                                              Even beyond AI, the vast majority of news is re-packaging information you got from somewhere else. AI can replace the re-writers, but not the original journalists, people who spoke to primary sources (or who were themselves eyewitnesses).

                                                                                              Any factual document should reference its sources. If not, it should be treated skeptically, regardless of whether AI or a human is doing that.

                                                                                              An article isn't automatically valueless just because it's synthesized. It can focus and contextualize, regardless of whether it's human or AI written. But it should at the very least be able to say "This is the actual fact of the matter", with a link to it. (And if AI has hallucinated the link, that's a huge red flag.)

                                                                                              • foxbarrington 18 hours ago

                                                                                                A common reaction I get to https://forty.news is that the stories “need sources”, which I always find funny. I don’t hear the same demand for sources from every other news outlet (I find it extra weird because all FN’s stories are 40 years old, simple to verify, and can’t push an agenda the same way).

                                                                                                Totally agree with you: all newspapers should cite sources. What’s silly to me is how selectively people care—big outlets get to hand-wave the “trust me” part even when a piece is basically a lightly rewritten press release, thinly sourced, or reflecting someone’s incentives more than reality.

                                                                                                • squeaky-clean 10 hours ago

                                                                                                  For Forty News, I don't think the "need sources" requests are about the contents of the news stories. They're about where these stories came from. How can I know these were ever actually published? As it currently stands, I can't tell if these were pulled from real newspapers or AI-generated as a simulation of what the story might have looked like condensed to 10 sentences.

                                                                                                  • CodingJeebus 12 hours ago

                                                                                                    > all newspapers should cite sources.

                                                                                                    You'd lose a lot of valid sourcing if you made this a requirement. For example, the Catholic Church scandal investigation would never have seen the light of day if the key legal sources corroborating the story had to give up their identity as part of the process. Speaking off the record is often where a lot of those kinds of stories come together.

                                                                                                    And the reaction around the world to that story, the thousands of victims that came forward, resoundingly confirmed what people were saying on background.

                                                                                                    • Spivak 13 hours ago

                                                                                                      Well yeah, because investigative journalism and original reporting (outside of the spectacle of buying a plane ticket to a warzone or weather disaster so the reporter can have a dramatic background) is too expensive when people come to you in droves with literally pre-written articles you can rubber-stamp and publish.

                                                                                                      Which, by the way, is how to get in the paper if you ever want to; it's super easy. AI will help you learn how to write in the right tone/voice for news if you don't know how.

                                                                                                    • ntnsndr 15 hours ago

                                                                                                      For example, the Colorado Sun has labels on every story for the nature of reporting that went into it: https://coloradosun.com/

                                                                                                      Some may find it surprising that this is left over from the Sun's early support from the crypto journalism project Civil.

                                                                                                      • stahorn 19 hours ago

                                                                                                        Just like we want to know where the food we eat comes from, we want to know where the information comes from. Of course there's the limit of journalists having to keep their sources secret in many cases. But disclosing the original publisher should, I think, be possible.

                                                                                                        • carlosjobim 18 hours ago

                                                                                                          There's already such a label: "exclusive!"

                                                                                                        • neilv 16 hours ago

                                                                                                          > Any news content created using generative AI must also be reviewed by a human employee “with editorial control” before publication.

                                                                                                          To emphasize this: it's important that the organization assume responsibility, just as they would with traditional human-generated 'content'.

                                                                                                          What we don't want is for these disclaimers to be used like the disclaimers of tech companies deploying AI: to try to weasel out of responsibility.

                                                                                                          "Oh no, it's 'AI', who could have ever foreseen the possibility that it would make stuff up, and lie about it confidently, with terrible effects. Aw, shucks: AI, what can ya do. We only designed and deployed this system, and are totally innocent of any behavior of the system."

                                                                                                          Also don't turn this into a compliance theatre game, like we have with information security.

                                                                                                          "We paid for these compliance products, and got our certifications, and have our processes, so who ever could have thought we'd be compromised."

                                                                                                          (Other than anyone who knows anything about these systems, and knows that the stacks and implementation and processes are mostly a load of performative poo, chosen by people who really don't care about security.)

                                                                                                          Hold the news orgs responsible for 'AI' use. The first time a news report wrongly defames someone, or gets someone killed, a good lawsuit should wipe out all their savings on staffing.

                                                                                                          • Llamamoe a day ago

                                                                                                            Ideally, trying to pass anything AI-generated as human-made content would be illegal, not just news, but it's a good start.

                                                                                                            • xnorswap a day ago

                                                                                                              That could do more harm than good.

                                                                                                              Like how California's cancer-warning law (Prop 65) is useless, because it makes it look like everything is known to the state of California to cause cancer, which in turn makes people just ignore and tune out the warnings because they're not actually delivering signal over noise. This in turn harms people when they think, "How bad can tobacco be? Even my Aloe Vera plant has a warning label".

                                                                                                              Keep it to generated news articles, and people might pay more attention to them.

                                                                                                              Don't let the AI lobby insist on anything that's touched an LLM getting labelled, because if it gets slapped on anything that's even passed through a spell-checker or saved in Notepad ( somehow this is contaminated, lol ), then it'll become a useless warning.

                                                                                                              • Llamamoe 19 hours ago

                                                                                                                > That could do more harm than good.

                                                                                                                The downside to having labels on AI-written political comments, stellar reviews of bad products, speeches by a politician, or supposed photos of wonderful holiday destinations in ads targeted at old people is what, exactly?

                                                                                                                Are you really arguing that putting a label on AI-generated content could somehow do more harm than leaving it (approximately) indistinguishable from the real thing?

                                                                                                                I'm not arguing that we need to label anything that used gen AI in any capacity, but past the point of e.g. minor edits, yeah, it should be labeled.

                                                                                                                • terminalshort 17 hours ago

                                                                                                                  None of those AI written political comments will have the label added because it's unprovable, and those propaganda shops are based well outside of the necessary jurisdiction anyway. It will just be a burden on legitimate actors and a way for the government to harass legitimate media outlets that it doesn't like with expensive "AI usage investigations."

                                                                                                                • elric 15 hours ago

                                                                                                                  I bought a piece of wooden furniture some time ago. It came with a label saying that the state of California knows it to be a carcinogen. I live in Belgium. It was weird.

                                                                                                                  • frm88 38 minutes ago

                                                                                                                    The proposition 65 warnings apply to carcinogenic materials used on furniture surfaces which can be released into the air or accumulate in dust. None of these substances are a conditio sine qua non, there are alternatives. https://www.p65warnings.ca.gov/fact-sheets/furniture-product...

                                                                                                                    The same warnings and labels are used in the EU, for example for formaldehyde which will be severely limited in its use starting in August 2026. https://easecert.com/blogs/insights/formaldehyde-emission-li...

                                                                                                                    It may look weird, but personally I prefer a warning to being subjected to toxic substances without my knowledge.

                                                                                                                    • bogwog 15 hours ago

                                                                                                                      Just an observation, but this California meme seems like the go-to talking point for anti AI regulation crowd lately.

                                                                                                                      • elric 2 hours ago

                                                                                                                        That's a weird comparison, hadn't heard that one yet.

                                                                                                                        I'm very much in favour of regulating (and heavily taxing) AI. But I very much dislike silly warning labels that miss the point. Owning wooden furniture is not carcinogenic. Inhaling tons of wood dust (e.g. from sanding wood in a poorly ventilated room) could be carcinogenic. But putting such warning labels on furniture is just ridiculous scaremongering.

                                                                                                                        • turtlesdown11 13 hours ago

                                                                                                                          It's not even a good argument. Studies have demonstrated it reduces toxic chemicals in the body, and also deters companies from using the toxic chemicals in their products.

                                                                                                                      • cardanome 20 hours ago

                                                                                                                        > Don't let the AI lobby insist on anything that's touched an LLM getting labelled, because if it gets slapped on anything that's even passed through a spell-checker or saved in Notepad

                                                                                                                        People have been writing articles without the help of an LLM for decades.

                                                                                                                        You don't need an LLM for grammar and spell checking, arguably an LLM is less efficient and currently worse at it anyway.

                                                                                                                        The biggest help an LLM can provide is with research, but that is only because search engines have been artificially enshittified these days. Even there, the usefulness is very limited because of hallucinations, so you might be better off without.

                                                                                                                        There is no proof that LLMs can significantly improve the workflow of a professional journalist when it comes to creating high quality content.

                                                                                                                        So no, don't believe the hype. There will still be enough journalists not using LLMs at all.

                                                                                                                        • turtlesdown11 13 hours ago

                                                                                                                          > Like how California's bylaw about cancer warnings are useless

                                                                                                                          Californians have measurably lower concentrations of toxic chemicals than non-Californians, so very useless!

                                                                                                                          • direwolf20 a day ago

                                                                                                                            Imagine selling a product with the tagline: "Unlike Pepsi, ours doesn't cause cancer."

                                                                                                                            • SkyBelow a day ago

                                                                                                                              It is worse, even less than useless. In the California case, there is very little to gain by lying and not putting a sticker on items that should have one. With AI-generated content, as the models get to the point where we can't tell anymore if it is fake, there are plenty of reasons to pass off a fake as real, and conditioning people to expect an AI warning will make them more likely to fall for content that ignores this law and doesn't label itself.

                                                                                                                            • driverdan 21 hours ago

                                                                                                                              What does that mean though? Photos taken using mobile camera apps are processed using AI. Many Photoshop tools now use AI.

                                                                                                                              • Llamamoe 19 hours ago

                                                                                                                                Obviously it should not apply to anything using machine learning based algorithms in any way, just content made using generative AI, with exceptions for minor applications and/or a separate label for smaller edits.

                                                                                                                              • reliabilityguy a day ago

                                                                                                                                How do we know what’s AI-generated vs. sloppy human work? Of course in some situations it is obvious (e.g., video), but text? Audio?

                                                                                                                                • FeteCommuniste 21 hours ago

                                                                                                                                  And of course you can even ask AI to add some "human sloppiness" as part of the prompt (spelling mistakes, run-on sentences, or whatever).

                                                                                                                                • ppeetteerr 16 hours ago

                                                                                                                                  Publishing is more than just authoring. You have research, drafts, edits, source verification, voice, formatting, multiple edits for different platforms and mediums. Each one of those steps could be done by AI. It's not a single-shot process.

                                                                                                                                  • pezgrande a day ago

                                                                                                                                    Where do we put the line between AI-generated and AI-assisted (aka Photoshop and other tools)?

                                                                                                                                    • sekai 21 hours ago

                                                                                                                                      > Ideally, trying to pass anything AI-generated as human-made content would be illegal, not just news, but it's a good start.

                                                                                                                                      Does photoshop fall under this category?

                                                                                                                                      • hermannj314 21 hours ago

                                                                                                                                        Spell check, autocomplete, grammar editing, A-B tests for bylines and photo use, related stories, viewers also read, tag generation

                                                                                                                                        I guess you have to disclose every single item on your news site that does anything like this. Any byte that touches a stochastic process is tainted forever.

                                                                                                                                        • catlifeonmars 6 hours ago

                                                                                                                                          Colloquially “AI” means LLMs and generative art. If you’re trying to make an argument by absurdity and you don’t want it to fall flat, maybe keep it relevant and don’t attack the straw man you just fabricated?

                                                                                                                                          • b40d-48b2-979e 19 hours ago

                                                                                                                                            None of those things are "AI" (LLMs). We had those things before, we'll have them after.

                                                                                                                                        • jacquesm a day ago

                                                                                                                                          Fully agreed.

                                                                                                                                          • infecto 21 hours ago

                                                                                                                                            Please no. I don’t want that kind of future. It’s going to be California cancer warnings all over again.

                                                                                                                                            I don’t like AI slop, but this kind of legislation does nothing. Look at the low-quality garbage that already exists; do we really need another step in the flow to check whether it’s AI?

                                                                                                                                            You can't legislate these problems away.

                                                                                                                                            • patrick451 20 hours ago

                                                                                                                                              Ideally, we would just ban AI content altogether.

                                                                                                                                              • Llamamoe 19 hours ago

                                                                                                                                                I don't think there's any way for that to happen, and IF we could create a solid legislative framework, AI could definitely (at some point in the future) contribute more good than bad to society.

                                                                                                                                            • TheAceOfHearts a day ago

                                                                                                                                              I'm worried that this will lead to a Prop 65 [0] situation, where eventually everything gets flagged as having used AI in some form. Unless it suddenly becomes a premium feature to have 100% human-written articles — but are people really going to pay for that?

                                                                                                                                              > substantially composed, authored, or created through the use of generative artificial intelligence

                                                                                                                                              The lawyers are gonna have a field day with this one. This wording makes it seem like you could do light editing and proof-reading without disclosing that you used AI to help with that.

                                                                                                                                              [0] https://en.wikipedia.org/wiki/1986_California_Proposition_65

                                                                                                                                              • tokioyoyo a day ago

                                                                                                                                                At least it would be possible to autofilter everything out. Maybe the market will somehow make it possible for non-AI content to get some spotlight because of that.

                                                                                                                                                • em500 a day ago

                                                                                                                                                  > I'm worried that this will lead to a Prop 65 [0] situation, where eventually everything gets flagged as having used AI in some form.

                                                                                                                                                  This is very predictably what's going to happen, and it will be just as useless as Prop 65 or the EU cookie laws or any other mandatory disclaimers.

                                                                                                                                                  • layer8 a day ago

                                                                                                                                                    The EU ePrivacy directive isn’t about disclaimers.

                                                                                                                                                    • consp a day ago

                                                                                                                                                      The problem is that people believe it is. People believe the advertising industry's narrative that they are forced to show the insane consent screens and to make them difficult. Yet they are not: "reject all" must be as easy as "accept all" (and "legitimate reasons" do not exist; uses are either allowed, in which case you don't have to ask, or they are not).

                                                                                                                                                      • direwolf20 a day ago

                                                                                                                                                        Manufacturers also have the choice to avoid carcinogens but they don't.

                                                                                                                                                    • codewench a day ago

                                                                                                                                                      How is that useless? You adding the warning tells me everything I need to know.

                                                                                                                                                      Either you generated it with AI, in which case I can happily skip it, or you _don't know_ if AI was used, in which case you clearly don't care about what you produce, and I can skip it.

                                                                                                                                                      The only concern then is people who use AI and don't apply the warning, but given how easy it is to identify AI-generated material, you just need a good one-strike rule and a judicious ban hammer.

                                                                                                                                                      • SkyBelow a day ago

                                                                                                                                                        Because you have to be able to prove it wasn't AI when the law is tested, and keeping records proving you didn't use AI is going to be really difficult, if it's possible at all. For little people having fun, it won't matter unless you poke the wrong bear. But for companies that are constantly the target of lawsuits, expect a new field of unlabeled-AI trolling comparable to patent trolling.

                                                                                                                                                        We already see this with the California label: it gets applied to things that don't cause cancer because putting the label on is much cheaper than going through the process of proving that some random thing doesn't.

                                                                                                                                                        If the government showed up and claimed your comment was AI generated and you had to prove otherwise, how would you?

                                                                                                                                                        • shimman 16 hours ago

                                                                                                                                                          "One regulation was kinda bad, so we should never regulate anything again."

                                                                                                                                                          Good god, this is pathetic. Do you financially gain from AI or do you think it's hard to prove someone didn't use it? Like this is the bare minimum and you're throwing temper tantrums...

                                                                                                                                                          The onus will be on the AI companies pushing these wares to follow regulations. If it makes it harder for the end user to use these wares, well too bad so sad.

                                                                                                                                                          • SkyBelow 16 hours ago

                                                                                                                                                            >"One regulation was kinda bad, so we should never regulate anything again."

                                                                                                                                                            Please don't misrepresent what someone says. That does not lead to constructive dialog.

                                                                                                                                                            I posed a question challenging a specific way of regulating a specific thing, to show that it's difficult. That is not the same as dismissing all regulation.

                                                                                                                                                            Also, please avoid the personal mentions.

                                                                                                                                                            >The onus will be on the AI companies pushing these wares to follow regulations.

                                                                                                                                                            That wasn't the challenge. The issue raised isn't AI companies failing to label things as AI; the example given had them very much following the regulation.

                                                                                                                                                    • mold_aid a day ago

                                                                                                                                                      I think a lot of people are asking this question about many digital services; I'm pretty sure that in areas like education and media, "no AI!" is going to be something that rich people look for, sure.

                                                                                                                                                      But editing and proofreading are "substantial" elements of authorship. Hope these laws include criminal penalties for "it's not just this - it's that!" "We seized Tony Dokoupil's computer and found Grammarly installed" — right, straight to jail.

                                                                                                                                                      • turtlesdown11 13 hours ago

                                                                                                                                                        seems like prop 65 works well

                                                                                                                                                        https://www.washingtonpost.com/climate-solutions/2025/02/12/...

                                                                                                                                                        > The study, published Wednesday in Environmental Science & Technology, found that California’s right-to-know law, also known as Proposition 65, has effectively swayed dozens of companies from using chemicals known to cause cancer, reproductive harm or birth defects.

                                                                                                                                                        ...

                                                                                                                                                        > Researchers interviewed 32 businesses from a variety of sectors including personal care, clothing and health care, concluding that the law has led manufacturers to remove toxic chemicals from their products. And the impact is significant: 78 percent of interviewees said Proposition 65 prompted them to reformulate their ingredients; 81 percent of manufacturers said the law tells them which chemicals to avoid; 69 percent said it promotes transparency about ingredients and the supply chain.

                                                                                                                                                      • dweekly 20 hours ago

                                                                                                                                                        I've begun an AI content disclosure working group at the W3C, if folks are interested in helping to craft a standard that allows websites to voluntarily disclose the degree to which AI was involved in creating all or part of a page. That would let publishers comply with this law as well as Article 50 of the EU AI Act.

                                                                                                                                                        https://www.w3.org/community/ai-content-disclosure/

                                                                                                                                                        https://github.com/dweekly/ai-content-disclosure

                                                                                                                                                        • staticautomatic 17 hours ago

                                                                                                                                                          How does one get involved?

                                                                                                                                                          • dweekly 12 hours ago

                                                                                                                                                            Yay, I'd love to have your help!

                                                                                                                                                            1) Anyone can join the W3C group; you don't need to be a formal member of W3C!

                                                                                                                                                            2) What's dumb about the proposal itself? How could it better achieve its goals?

                                                                                                                                                            3) You can see some dialogue at https://github.com/WICG/proposals/issues/261 - what resonates and doesn't in the feedback and critique?

                                                                                                                                                        • NietTim 21 hours ago

                                                                                                                                                          New York also wants 3D printers to know when they are printing gun parts. Sure, these initiatives are well-intentioned, but they only work when "the good ones" choose to label their content as AI-generated (or their prints as gun parts). There will _never_ be a 100% surefire, non-invasive way to know whether an article was (in part) AI-generated, analogous to how "2D printers" (lol) refuse to photocopy fiat currency, to circle back to the 3D printer argument.

                                                                                                                                                          IMO it's already too late, and effort should instead be focused on recognizing this and quickly moving on to prevention through education, instead of trying to smother it with legislation; it is just not going away.

                                                                                                                                                          • RobotToaster a day ago

                                                                                                                                                            I can see this ending up like prop65 warnings. Every website will have in the footer "this website may contain content known to the state of New York to be AI generated"

                                                                                                                                                            • Ir0nMan 19 hours ago

                                                                                                                                                              Let's make this a thing if the law passes.

                                                                                                                                                            • VMG a day ago

                                                                                                                                                              Step 2: outlets slap this disclaimer on all content, regardless of AI usage, making it useless

                                                                                                                                                              Step 3: regulator prohibits putting label on content that is not AI generated

                                                                                                                                                              Step 4: outlets make sure to use AI for all content

                                                                                                                                                              Let's call it the "Sesame effect"

                                                                                                                                                              • NicuCalcea a day ago

                                                                                                                                                                This would be an improvement in my book.

                                                                                                                                                                I'm a data journalist, and I use AI in some of my work (data processing, classification, OCR, etc.). I always disclose it in a "Methodology" section in the story. I wouldn't trust any reporting that didn't disclose the use of AI, and if an outlet slapped a disclaimer on their entire site, I wouldn't trust that outlet.

                                                                                                                                                                • terminalshort 17 hours ago

                                                                                                                                                                  So every time a reporter researches something, does a Google search, and Gemini results pop up, AI use has to be added to the methodology section — and basically 100% of all articles end up with the "AI use" label attached.

                                                                                                                                                                  • shimman 11 hours ago

                                                                                                                                                                    Yes because it would tell me immediately that the reporter can be safely ignored.

                                                                                                                                                                • jacquesm a day ago

                                                                                                                                                                  Or

                                                                                                                                                                  Step 1: those outlets that actually do the work see an increase in subscribers.

                                                                                                                                                                  • orwin a day ago

                                                                                                                                                                    Alternative timeline

                                                                                                                                                                    Step 2.5: 'unlike those news outlets, all our work is verified by humans'

                                                                                                                                                                    Step 3: work as intended.

                                                                                                                                                                  • kaicianflone 17 hours ago

                                                                                                                                                                    This feels like a symptom of a deeper issue: we’re treating AI outputs as if they’re authoritative when they’re really just single, unaccountable generations. Disclaimers help, but they don’t fix the decision process that produced the content in the first place.

                                                                                                                                                                    One approach we’ve been exploring is turning high-stakes AI outputs (like news summaries or classifications) into consensus jobs: multiple independent agents submit or vote under explicit policies, with incentives and accountability, and the system resolves the result before anything is published. The goal isn’t “AI is right,” but “this outcome was reached under clear rules and can be audited.”

                                                                                                                                                                    That kind of structure seems more scalable than adding disclaimers after the fact. We’re experimenting with this idea on an open source CLI at https://consensus.tools if anyone’s interested in the underlying mechanics.
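
[Editor's illustration] As a minimal sketch of the voting step such a pipeline might use: several independent agents each submit an answer, and the result only resolves if the top answer clears a quorum, otherwise publication is held. The function name and threshold here are illustrative assumptions, not the consensus.tools API.

```python
from collections import Counter

def resolve_consensus(outputs: list[str], quorum: float = 0.5):
    """Resolve independent agent outputs by majority vote.

    Returns (winner, support) when the most common answer's share of
    votes strictly exceeds `quorum`; otherwise (None, support), which
    signals "no consensus — hold publication / escalate to review".
    """
    if not outputs:
        return None, 0.0
    counts = Counter(outputs)
    winner, votes = counts.most_common(1)[0]
    support = votes / len(outputs)
    return (winner, support) if support > quorum else (None, support)

# Three independent classifications of the same article; two agree.
winner, support = resolve_consensus(["politics", "politics", "sports"])
```

A 2-of-3 agreement resolves here, while a 1-vs-1 split would return `None` — the "auditable rules" part is that the quorum and vote tally, not any single generation, determine the outcome.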

                                                                                                                                                                    • CodingJeebus 17 hours ago

                                                                                                                                                                      I agree with the sentiment of this, but it makes one major assumption that I don't think will pass muster in the long run: that people generating output care enough themselves to do it "the right way". Many don't and never will.

                                                                                                                                                                      Low-effort content mills will never, ever care enough to generate more accurate, consensus-based output, especially if it adds complexity and cost to their workflows.

                                                                                                                                                                      > That kind of structure seems more scalable than adding disclaimers after the fact.

                                                                                                                                                                      Not if your goal as a business is to churn out slop as fast and cheaply as possible, and a whole lot of online content is like that. A disclaimer is warranted because you cannot force everyone to use the kinds of approaches that you're talking about. A ton of people who either don't know or don't care what they're putting out will inevitably exist.

                                                                                                                                                                    • wateralien a day ago

                                                                                                                                                                      They need to enforce this with very large fines.

                                                                                                                                                                      • delichon a day ago

                                                                                                                                                                        > In addition, the bill contains language that requires news organizations to create safeguards that protect confidential material — mainly, information about sources — from being accessed by AI technologies.

                                                                                                                                                                        So clawdbot may become a legal risk in New York, even if it doesn't generate copy.

                                                                                                                                                                        And you can't use AI to help evaluate which data AI is forbidden to see, so you can't use AI over unknown content. This little side-proposal could drastically limit the scope of AI's usefulness overall, especially as the idea of data forbidden to AI tech expands to other confidential material.

                                                                                                                                                                        • InsideOutSanta a day ago

                                                                                                                                                                          This seems like common sense. I'm running OpenClaw with GLM-4.6V as an experiment. I'm allowing my friends to talk to it using WhatsApp.

                                                                                                                                                                          Even though it has been instructed to maintain privacy between people who talk to it, it constantly divulges information from private chats, gets confused about who is talking to it, and so on.^ Of course, a stronger model would be less likely to screw up, but this is an intrinsic issue with LLMs that can't be fully solved.

                                                                                                                                                                          Reporters absolutely should not run an instance of OpenClaw and provide it with information about sources.

                                                                                                                                                                          ^: Just to be clear, the people talking to it understand that they cannot divulge any actual private information to it.

                                                                                                                                                                        • nomercy400 a day ago

                                                                                                                                                                          You might as well place it next to the © 2026, at the bottom of every page.

                                                                                                                                                                          • rektlessness a day ago

                                                                                                                                                                            Broad, ambiguous language like "substantially composed by AI" will trigger overcompliance, rendering disclosures meaningless — but maybe that was the plan.

                                                                                                                                                                            • rasjani a day ago

                                                                                                                                                                              Finland's public broadcasting company YLE has the same rule: even if they only do cleanups of still images, they need to mark the article as containing AI-generated content.

                                                                                                                                                                              • hellojesus 21 hours ago

                                                                                                                                                                                Do they find that fewer people read articles that were written by humans but have that label slapped on for the photo vs a baseline?

                                                                                                                                                                                If not: I suspect people may not care much, so what's the point of the label?

                                                                                                                                                                                If so: why would they continue to use AI solely to clean up photos?

                                                                                                                                                                              • nilslindemann 15 hours ago

                                                                                                                                                                                I support this for the same reason I want scripted reality TV shows to be labeled as such. Anything that claims to be reality but isn't should be clearly marked as such, unless it's obvious from the context.

                                                                                                                                                                                • ameliaquining 17 hours ago

                                                                                                                                                                                  I'm not convinced that this law, if it passed, would survive a court challenge on First Amendment grounds. U.S. constitutional law generally doesn't look kindly on attempts to regulate journalism.

                                                                                                                                                                                  • saghm 17 hours ago

                                                                                                                                                                                    OTOH any challenge that makes it all the way to the Supreme Court right now has as much chance of being a decision that completely ignores constitutional law as it does of being decided on constitutional grounds.

                                                                                                                                                                                  • chrisjj 21 hours ago

                                                                                                                                                                                    Why limit this to news? Equally deserving of protection is e.g. opinion.

                                                                                                                                                                                    • bluebxrry 21 hours ago

                                                                                                                                                                                      How about, instead of calling Claude a clanker again (which he can't control), we give everyone a fair shot this time with a bill that requires the news to not suck in the first place?

                                                                                                                                                                                      • ddtaylor a day ago

                                                                                                                                                                                        Oregon kind of already has this; they just don't enforce their laws.

                                                                                                                                                                                        • mothballed a day ago

                                                                                                                                                                                          Oregon and New York are both still trying to work their way up the 'rule of law' pyramid, past the base level of stopping fentanyl and meth heads from robbing convenience stores and parked cars. Every moment spent enforcing AI disclaimers instead is an affront to the populace.

                                                                                                                                                                                          • ddtaylor 15 hours ago

                                                                                                                                                                                            I'm well aware. I left Oregon 3 weeks ago and now live in DC. In Oregon, junkies spit on your wife and children while the police watch, say "not enough spit landed," and refuse to arrest anyone. When your property is stolen they look at you like you're crazy for wanting it investigated or recovered, even when it has a tracker in it and the device says it's in that pile of stolen stuff near those tweakers. Good riddance, Oregon!

                                                                                                                                                                                            • Der_Einzige 14 hours ago

                                                                                                                                                                                              Don't let the door hit your ass on the way out. Oregonians have more personal freedom than almost any other state. Ron Wyden is by far the best sitting senator. It's a remarkably pro-gun state for being so left wing. Just use your tech bro money and live in a suburb. Cheapest weed in America. legalish shrooms. It's harder here than almost anywhere else for cops to fk your life up. Good.

                                                                                                                                                                                              Also it's objectively very low on violent crime, and the problems you talk about exist in every PNW city: SF, Portland, Vancouver (WA), Vancouver (BC), Seattle. They're also the places where all the innovation, including AI (despite the non-SF cities' hatred for it), is happening.

                                                                                                                                                                                              • ddtaylor 11 hours ago

                                                                                                                                                                                                That personal freedom comes at a cost to the safety of your family.

                                                                                                                                                                                                Property crime rates skyrocketed.

                                                                                                                                                                                                Most stores have to have locked port access.

                                                                                                                                                                                                Go walk around Salem or Portland and visit some public places. It's not safe to do so, and anyone can verify that in seconds.

                                                                                                                                                                                                Living in DC, I actually have a much better understanding now of how people can be compassionate. What Oregon is doing is not compassion; it's actively causing harm, destroying thirty thousand lives as we speak, and the number is climbing. Those are real people dying on your sidewalks in front of you. Someone else's daughter or son.

                                                                                                                                                                                                The environment has been distorted and the problem is bigger now than the appetite for compassion.

                                                                                                                                                                                                I hope you stay safe.

                                                                                                                                                                                        • talkingtab 18 hours ago

                                                                                                                                                                                          I'm beginning to suspect HN also needs such a bill. Maybe it is not AI content, but so many prominent posts on HN feel like advertising. Perhaps the good thing about AI is that it decreases the trust level. Or is that really a good thing?

                                                                                                                                                                                          [Edit: spelling sigh]

                                                                                                                                                                                          • python999 11 hours ago

                                                                                                                                                                                            How about disclaimers on AI-generated legislation?

                                                                                                                                                                                            • cmiles8 a day ago

                                                                                                                                                                                              This is a good idea, although most AI-written content is also still pretty obvious. It consistently has a certain feel that just seems off.

                                                                                                                                                                                              • sidrag22 21 hours ago

                                                                                                                                                                                                Maybe for articles... and for people who seem to think copy-pasting a basic GPT response into a LinkedIn-lunatic-style post will pass the sniff test of anyone familiar with AI-generated responses...

                                                                                                                                                                                                But I wouldn't be surprised to see a massive % of the comments that I don't instantly attribute to AI actually being AI. RP prompts are just so powerful, and even my local mediocre model could have written 100 comments in the time it's taking me to write this one.

                                                                                                                                                                                                All humans are pattern-seeking to a fault; the number of people, even in this community, who will not consider something AI-generated just because it doesn't have em dashes or emojis is probably pretty high.

                                                                                                                                                                                                • chrisjj 21 hours ago

                                                                                                                                                                                                  > Although most AI-written content is also still pretty obvious. It consistently has a certain feel that just seems off.

                                                                                                                                                                                                  I think you're saying "AI" written content having a certain feel that just seems off is obviously "AI" written content.

                                                                                                                                                                                                  Yes. But you've no way of knowing that's most of it. There could be 10x more that we don't detect.

                                                                                                                                                                                                • TuringNYC 21 hours ago

                                                                                                                                                                                                  What happens if I use linear regression on a chart? Where does one draw the line on "AI"?

                                                                                                                                                                                                  • cardanome 20 hours ago

                                                                                                                                                                                                    Obviously people mean LLMs these days when talking about AI. Don't be obtuse.

                                                                                                                                                                                                  • asah a day ago

                                                                                                                                                                                                    We've seen this movie - see California prop 65 warnings on literally every building.

                                                                                                                                                                                                    It also doesn't work to penalize fraudulent warnings - they simply include a harmless bit of AI to remain in compliance.

                                                                                                                                                                                                    • NitpickLawyer a day ago

                                                                                                                                                                                                      > It also doesn't work to penalize fraudulent warnings

                                                                                                                                                                                                      How would you classify fraudulent warnings? "Hey chatgpt, does this text look good to you? LGTM. Ship it".

                                                                                                                                                                                                      • kevin_thibedeau 17 hours ago

                                                                                                                                                                                                        The smell of Harbor Freight stores from before prop 65 compared to now indicates that it did work.

                                                                                                                                                                                                      • hsuduebc2 10 hours ago

                                                                                                                                                                                                        I don’t know whether it’s ignorance of the subject or a sense of exceptionalism, but it’s striking that politicians who write this kind of legislation don’t spend even five minutes checking whether such a law is actually enforceable. I would genuinely like to see how they plan to enforce this in practice. At best, it seems like it could only work through reports and multiple witness statements.

                                                                                                                                                                                                        • jaredklewis 17 hours ago

                                                                                                                                                                                                          This is so dumb. Name literally any problem caused by AI generated content (there are dozens to choose from) and I will explain why this law will make absolutely no impact on that issue.

                                                                                                                                                                                                          Now articles from organizations with legitimate journalists and fact checkers, like the NYT, WSJ, or the Economist, will need an "AI generated" badge because they used an AI assistant and have risk-averse legal departments. This will be gleefully pointed out by every brain-dead Twitter conspiracy theorist, Breitbart columnist, 9/11-truther Substack writer, and Russian spam bot as they happily spew unbadged drivel out into the world. Thanks so much, New York!

                                                                                                                                                                                                          AI doesn’t make bad news content. Complete disregard for objective reality does. I’ll take an ai assisted human that actually cares about truth over an unassisted partisan hooligan every time.

                                                                                                                                                                                                          If this is the best our legislatures can come up with we are so utterly fucked…

                                                                                                                                                                                                          • nh43215rgb a day ago

                                                                                                                                                                                                            Federal level would be the best, but this is a start.

                                                                                                                                                                                                            • kgwxd 21 hours ago

                                                                                                                                                                                                              AI Generated or News? You can't have both.

                                                                                                                                                                                                              • seydor a day ago

                                                                                                                                                                                                                That's the equivalent of having a disclaimer "This article was written using MS Word". Utterly useless in this day and age

                                                                                                                                                                                                                • ericzawo 17 hours ago

                                                                                                                                                                                                                  Good.

                                                                                                                                                                                                                  • PlatoIsADisease a day ago

                                                                                                                                                                                                                    In 10-20 years all this AI disclaimer stuff is going to be like 'don't use wikipedia, it could lie!'

                                                                                                                                                                                                                    Status Quo Bias is a real thing, and we are seeing those people in meltdown with the world changing around them. They think avoiding AI, putting disclaimers on it, etc... will matter. But they aren't being rational, they are being emotional.

                                                                                                                                                                                                                    The economic value is too high to stop and the cat is out of the bag with 400B models on local computers.

                                                                                                                                                                                                                    • jacquesm a day ago

                                                                                                                                                                                                                      I don't think that's true. The 'this battle is already over' attitude is the most defeatist strategy possible. It's effectively complying in advance, rolling over before you've attempted to create the best possible outcome.

                                                                                                                                                                                                                      With that attitude we would not have voting, human rights (for what they're worth these days), unions, a prohibition on slavery and tons of other things we take for granted every day.

                                                                                                                                                                                                                      I'm sure AI has its place but to see it assume the guise of human output without any kind of differentiating factor has so many downsides that it is worth trying to curb the excesses. And news articles in particular should be free from hallucinations because they in turn will cause others to pass those on. Obviously with the quality of some publications you could argue that that is an improvement but it wasn't always so and a free and capable press is a precious thing.

                                                                                                                                                                                                                      • mikkupikku 20 hours ago

                                                                                                                                                                                                                        > With that attitude we would not have voting, human rights (for what they're worth these days), unions, a prohibition on slavery and tons of other things we take for granted every day.

                                                                                                                                                                                                                        None of these things were rolling back a technology. History shows that technology is a ratchet: the only way to get rid of a technology is social collapse, or supplanting it with something even more useful, or at the very least approximately as useful but safer.

                                                                                                                                                                                                                        Once a technology has proliferated, it's a fait accompli. You can regulate the technology, but turning the clock back isn't going to happen.

                                                                                                                                                                                                                        • jacquesm 19 hours ago

                                                                                                                                                                                                                          We have plenty of examples of regulated technology.

                                                                                                                                                                                                                          And usually the general public does not have a direct stake in the outcome (ok, maybe broadcast spectrum regulation should be mentioned there), but this time they do and given what's at stake it may well be worth trying to define what a good set of possible outcomes would be and how to get there.

                                                                                                                                                                                                                          As I mentioned above and which TFA is all about, the press for instance could be held to a standard that they have shown they can easily meet in the past.

                                                                                                                                                                                                                          • mikkupikku 15 hours ago

                                                                                                                                                                                                                            As I said, technology can be regulated. And this technology will be regulated.

                                                                                                                                                                                                                            However the technology is nonetheless here to stay, until it's replaced with something better.

                                                                                                                                                                                                                        • terminalshort 17 hours ago

                                                                                                                                                                                                                          Well, they aren't free from hallucinations with human authors either. Not too long ago there was an outbreak of articles in the "reputable" mainstream press claiming that a terrorist plot against the UN had been foiled, when it was actually (and obviously) a garden-variety SMS fraud operation. Why should I care if it's AI lying to me next time rather than the constant deluge of humans lying to me?

                                                                                                                                                                                                                        • Llamamoe a day ago

                                                                                                                                                                                                                          AI-written articles tend to be far more regurgitative, lower in value, and easier to ghostwrite with intent to manipulate the narrative.

                                                                                                                                                                                                                          Economic value or not, AI-generated content should be labeled, and trying to pass it as human-written should be illegal, regardless of how used to AI content people do or don't become.

                                                                                                                                                                                                                          • RobotToaster a day ago

                                                                                                                                                                                                                            My theory is that AI writes the way it does because it was trained on a lot of modern (organic) journalism.

                                                                                                                                                                                                                            So many words to say so little, just so they can put ads between every paragraph.

                                                                                                                                                                                                                            • charcircuit a day ago

                                                                                                                                                                                                                              That is low-quality articles in general. Have you never seen how hundreds of news sites regurgitate the same story from one another? This was happening long before AI. High-quality AI-written articles will still be high value.

                                                                                                                                                                                                                              • orwin a day ago

                                                                                                                                                                                                                                Did you go on Grokipedia at release? I still sometimes lose myself reading stuff on Wikipedia; I guarantee you that can't happen on Grok. There's so much noise between the facts that it's hard to enjoy.

                                                                                                                                                                                                                                • charcircuit a day ago

                                                                                                                                                                                                                                  Yes I did go immediately on release. I was finally able to correct articles that have been inaccurate on Wikipedia for years.

                                                                                                                                                                                                                                  • orwin 18 hours ago

                                                                                                                                                                                                                                    So you noticed how poor the prose was? Really unbearable to read.

                                                                                                                                                                                                                                    • charcircuit 17 hours ago

                                                                                                                                                                                                                                      I found it fine to read and it handled controversial subjects much better than Wikipedia.

                                                                                                                                                                                                                                      • orwin 12 hours ago

                                                                                                                                                                                                                                        I don't care about that; that wasn't the point, and no one truly cares about that. I wanted to know whether the feeling of reading meandering writing that can't get to the point was only mine when reading AI-generated content, or whether other people who "wiki walk" a lot (basically spend hours clicking on links and reading random pages) felt the same on Grokipedia. I didn't manage to do it because the writing was too "bad" for me (and I was once taken by a wiki walk on Wookieepedia, so my tolerance is high). I just wanted to know if it was shared. Did you wiki walk on Grokipedia, or do you just use it for "controversial subjects"?

                                                                                                                                                                                                                                        • charcircuit 9 hours ago

                                                                                                                                                                                                                                          I don't know what wiki walk is. I don't often use grokipedia since I can just prompt an LLM directly, which may in turn extract information from grokipedia.

                                                                                                                                                                                                                            • duskdozer a day ago

                                                                                                                                                                                                                              Current AI use is heavily subsidized; we will see how much value there actually is when it comes time to monetize.

                                                                                                                                                                                                                              • bigstrat2003 5 hours ago

                                                                                                                                                                                                                                > In 10-20 years all this AI disclaimer stuff is going to be like 'don't use wikipedia, it could lie!'

                                                                                                                                                                                                                                My dude, Wikipedia is still problematic as a source of information. That isn't the ridiculous comparison you intended it to be.

                                                                                                                                                                                                                                • simion314 a day ago

                                                                                                                                                                                                                                  Emotional my ass. Just have websites and social media give me a filter to hide AI stuff; I can't enjoy a video, post, or story anymore since I always doubt it is real. If I am part of a minority, this filter should not hurt the budget of companies, and it would encourage content generated by real people if we are larger than a dozen people.

                                                                                                                                                                                                                                  • wiseowise a day ago

                                                                                                                                                                                                                                    > But they aren't being rational, they are being emotional.

                                                                                                                                                                                                                                    When your mind is so fried on slop that you start to write like it.

                                                                                                                                                                                                                                    > The economic value is too high to stop and the cat is out of the bag with 400B models on local computers.

                                                                                                                                                                                                                                    Look at all this value created, like *checks notes* scam ads, apps that undress women and teenage girls, tech bros jerking each other off on Twitter, flooding open source with a tsunami of low-quality slop, inflating chip prices, thousands laid off in the name of cost savings, and dozens more.

                                                                                                                                                                                                                                    Cat is out of the bag for sure.

                                                                                                                                                                                                                                    • mikkupikku a day ago

                                                                                                                                                                                                                                      You may not like it, but this is what peak economic performance looks like.

                                                                                                                                                                                                                                      • blibble 20 hours ago

                                                                                                                                                                                                                                        no, this is what massive subsidy looks like

                                                                                                                                                                                                                                  • bill_joy_fanboy a day ago

                                                                                                                                                                                                                                    LOL! As if human-generated news content is any more honest or accurate...

                                                                                                                                                                                                                                    • charcircuit a day ago

                                                                                                                                                                                                                                      So literally every article will be labeled as AI assisted and it will be meaningless.

                                                                                                                                                                                                                                      >The use of generative artificial intelligence systems shall not result in: (i) discharge, displacement or loss of position

                                                                                                                                                                                                                                      Being able to fire employees is a great use of AI and should not be restricted.

                                                                                                                                                                                                                                      > or (ii) transfer of existing duties and functions previously performed by employees or worker

                                                                                                                                                                                                                                      Is this saying you can't replace an employee's responsibilities with AI? No wonder the article says it is getting union support.

                                                                                                                                                                                                                                      • nosianu a day ago

                                                                                                                                                                                                                                        > So literally every article will be labeled as AI assisted and it will be meaningless.

                                                                                                                                                                                                                                        The web novel website RoyalRoad has two different tags that stories can/should add: AI-Assisted and AI-Generated.

                                                                                                                                                                                                                                        Their policy: https://www.royalroad.com/blog/57/royal-road-ai-text-policy

                                                                                                                                                                                                                                        > In this policy, we are going to separate the use of AI for text, into 3 categories: General Assistive Technologies, AI-Assisted, AI-Generated

                                                                                                                                                                                                                                        The first category does not require tagging the story, only the other two do.

                                                                                                                                                                                                                                        > The new tags are as such:

                                                                                                                                                                                                                                        > AI-Assisted: The author has used an AI tool for editing or proofreading. The story thus reflects the author’s creativity and structure, but it may use the AI’s voice and tone. There may be some negligible amount of snippets generated by AI.

                                                                                                                                                                                                                                        > AI-Generated: The story was generated using an AI tool; the author prompted and directed the process, and edited the result.

                                                                                                                                                                                                                                        • nicoburns a day ago

                                                                                                                                                                                                                                          > So literally every article will be labeled as AI assisted and it will be meaningless.

                                                                                                                                                                                                                                          That might at least offer an opportunity for a news source to compete on not being AI-generated. I would personally be willing to pay for information sources that exclude AI-generated content.

                                                                                                                                                                                                                                          • westmeal a day ago

                                                                                                                                                                                                                                            How would you feel if an AI hallucinated and fired you from your job?

                                                                                                                                                                                                                                            • charcircuit a day ago

                                                                                                                                                                                                                                              I would feel that I must not have been documenting my value as well as I could have been, and I would try to do so better at my next job.

                                                                                                                                                                                                                                              • westmeal 19 hours ago

                                                                                                                                                                                                                                                That's genuinely insane but you're entitled to your opinion.

                                                                                                                                                                                                                                            • mcpar-land 19 hours ago

                                                                                                                                                                                                                                              > Being able to fire employees is a great use of AI and should not be restricted.

                                                                                                                                                                                                                                              Can you elaborate on this?

                                                                                                                                                                                                                                              • charcircuit 17 hours ago

                                                                                                                                                                                                                                                There is less bias when an AI measures who is being productive and who isn't. Signal from AI when measuring performance will be valuable for knowing when to fire people.

                                                                                                                                                                                                                                                • mcpar-land 15 hours ago

                                                                                                                                                                                                                                                  Do you think that LLMs are not biased / less biased than humans?

                                                                                                                                                                                                                                                  • charcircuit 9 hours ago

                                                                                                                                                                                                                                                    They have their own biases, but those biases are consistent.

                                                                                                                                                                                                                                              • vincnetas a day ago

                                                                                                                                                                                                                                                objects in the mirror are closer than they appear.