Gavin Newsom vetoes SB 1047 (wsj.com)
Submitted by atlasunshrugged 5 hours ago
  • worstspotgain 3 hours ago

    Excellent move by Newsom. We have a very active legislature, but it's been extremely bandwagon-y in recent years. I support much of Wiener's agenda, particularly his housing policy, but this bill was way off the mark.

    It was basically a torpedo against open models. Market leaders like OpenAI and Anthropic weren't really worried about it, or about open models in general. Its supporters were the also-rans like Musk [1] trying to empty out the bottom of the pack, as well as those who are against any AI they cannot control, such as antagonists of the West and wary copyright holders.

    [1] https://techcrunch.com/2024/08/26/elon-musk-unexpectedly-off...

    • dragonwriter 2 hours ago

      > Excellent move by Newsom. [...] It was basically a torpedo against open models.

      He vetoed it in part because the thresholds at which it applies at all are well beyond any current models, and he wants something that will impose greater restrictions on more models, including much smaller/lower-training-compute ones that this bill would have left alone entirely.

      > Market leaders like OpenAI and Anthropic weren't really worried about it, or about open models in general.

      OpenAI (along with Google and Meta) led the institutional opposition to the bill; Anthropic was a major advocate for it.

      • worstspotgain 2 hours ago

        > He vetoed it in part because the thresholds at which it applies at all are well beyond any current models, and he wants something that will impose greater restrictions on more models, including much smaller/lower-training-compute ones that this bill would have left alone entirely.

        Well, we'll see what passes again and when. By then there'll be more kittens out of the bag too.

        > Anthropic was a major advocate for it.

        I don't know about "major advocate"; the last I read was "cautious support" [1]. Perhaps Anthropic sees Llama as a bigger competitor of theirs than I do, but it could also just be PR.

        [1] https://thejournal.com/articles/2024/08/26/anthropic-offers-...

      • SonOfLilit 3 hours ago

        why would Google, Microsoft and OpenAI oppose a torpedo against open models? Aren't they positioned to benefit the most?

        • benreesman 3 hours ago

          Some laws are just bad. When the API-mediated/closed-weights companies agree with the open-weight/operator-aligned community that a law is bad, it’s probably got to be pretty awful. That said, though my mind might be playing tricks on me, I seem to recall the big labs being in favor at one time.

          There are a number of related threads linked, but I’ll personally highlight Jeremy Howard’s open letter as IMHO the best-argued case against SB 1047.

          https://www.answer.ai/posts/2024-04-29-sb1047.html

          • SonOfLilit 2 hours ago

            > The definition of “covered model” within the bill is extremely broad, potentially encompassing a wide range of open-source models that pose minimal risk.

            What is this wide range of >$100mm open source models he's thinking of? And who are the impacted small businesses that would be scared to train one (at a cost of >$100mm) without paying for legal counsel?

          • CSMastermind 3 hours ago

            The bill included language that required the creators of models to have various "safety" features that would severely restrict their development. It required audits and other regulatory hurdles to build the models at all.

            • llamaimperative 3 hours ago

              If you spent $100MM+ on training.

              • gdiamos 3 hours ago

                Advanced technology will drop the cost of training.

                The FLOP targets in that bill would be like saying “640KB of memory is all we will ever need” and outlawing anything more.

                Imagine what other countries would have done to us if we allowed a monopoly like that on memory in 1980.

                • llamaimperative 3 hours ago

                  No, there are two thresholds and BOTH must be met.

                  One of those is $100MM in training costs.

                  The other is measured in FLOPs but is already larger than what was used for GPT-4, so the “think of the small guys!” argument doesn’t make much sense.
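
                  For concreteness, a rough sketch of that two-pronged test (function and constant names are mine; thresholds paraphrased from the bill text, not legal advice):

                      # Sketch of SB 1047's "covered model" test as described above:
                      # BOTH prongs must hold for a model to be covered.
                      FLOP_THRESHOLD = 1e26          # training compute, int/float ops
                      COST_THRESHOLD = 100_000_000   # training cost in USD

                      def is_covered(training_flops: float, training_cost_usd: float) -> bool:
                          return (training_flops > FLOP_THRESHOLD
                                  and training_cost_usd > COST_THRESHOLD)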

                  • gdiamos an hour ago

                    Cost as a perf metric is meaningless, and the history of computer benchmarks has repeatedly proven this point.

                    There is a reason why we report time (speedup) in SPEC instead of $$.

                    The price you pay depends on who you are and who is giving it to you.

                    • llamaimperative 43 minutes ago

                      That’s why there are two thresholds.

                    • gdiamos an hour ago

                      Tell that to me when we get to Llama 15.

                      • llamaimperative 43 minutes ago

                        What?

                        • gdiamos 22 minutes ago

                          “But the big guys are struggling getting past 100KB, so ‘think of the small guys’ doesn’t make sense when the limit is 640KB.”

                          How do people on a computer technology forum ignore the 10,000x improvement in computers over 30 years due to advances in computer technology?

                          I could understand why politicians don’t get it.

                          I should think that computer systems companies would be up in arms over SB 1047 in the same way they would be if the government was thinking of putting a cap on hard drives bigger than 1 TB.

                          It puts a cap on FLOPs. Isn’t the biggest company in the world in the business of selling FLOPs?

                          • gdiamos 4 minutes ago

                            If your goal is to lift the limit, why put it in?

                            How would any computer industry accept a government mandated limit on perf?

                            Should NVIDIA accept a limit on flops?

                            Should Pure accept a limit on TBs?

                            Should Samsung accept a limit on HBM bandwidth?

                            Should Arista accept a limit on link bandwidth?

                            I don’t think that there is enough awareness that scaling laws tie intelligence to these HW metrics. Enforcing a cap on intelligence is the same thing as a cap on these metrics.

                            https://en.m.wikipedia.org/wiki/Neural_scaling_law
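
                            For concreteness, a rough sketch of one such law, the Chinchilla fit from Hoffmann et al. 2022 (names are mine; the constants are the published point estimates, approximate by nature):

                                # Chinchilla-style scaling law: loss falls as a power law
                                # in parameters N and training tokens D (Hoffmann et al. 2022).
                                def chinchilla_loss(n_params: float, n_tokens: float) -> float:
                                    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
                                    return E + A / n_params**alpha + B / n_tokens**beta

                                def train_flops(n_params: float, n_tokens: float) -> float:
                                    return 6 * n_params * n_tokens  # standard approximation

                            Under a fit like this, a cap on training FLOPs is, through N and D, a floor on achievable loss, i.e. a cap on capability.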

                            Has this legislation really thought through the implications of capping technology metrics, especially in a state where most of the GDP is driven by these metrics?

                            • llamaimperative 15 minutes ago

                              It would be crazy if the bill had a built-in mechanism to regularly reassess both the cost and FLOP thresholds… which it does.

                              Conversely to your sarcastic “understanding” of politicians’ stupidity, I can’t understand how tech people seem incapable or unwilling to actually read the legislation they have such strong opinions about.

                  • wslh an hour ago

                      All that means the barriers to entry for startups skyrocket.

                    • SonOfLilit an hour ago

                      Startups that spend >$100mm on one training run...

                      • wslh 22 minutes ago

                          There are startups and startups; the ones you read about in the media are just a fraction of the worldwide reality.

                  • worstspotgain 3 hours ago

                      If there were just one quasi-monopoly, it would probably have supported the bill. As it is, the market leaders have the competition from each other to worry about. Getting rid of open models wouldn't let them raise their prices much.

                    • SonOfLilit 3 hours ago

                      So if it's not them, who is the hidden commercial interest sponsoring an attack on open source models that cost >$100mm to train? Or does Wiener just genuinely hate megabudget open source? Or is it an accidental attack, aimed at something else? At what?

                      • worstspotgain 2 hours ago

                          Like I said, supporters included wary copyright holders and the bottom-market also-rans like Musk. If your model is barely holding up against Llama, what's the point of staying in?

                        • SonOfLilit 2 hours ago

                            And two of the three godfathers of AI, and all of the AI notkilleveryoneism crowd.

                          Actually, wait, if Grok is losing to GPT, why would Musk care about Llama more than Altman? Llama hurts his competitor...

                          • worstspotgain 2 hours ago

                            The market in my argument looks like OpenAI ~ Anthropic > Google >>> Meta (~ or maybe >) Musk/Alibaba. The top 3 aren't worried about the down-market stuff. You're free to disagree of course.

                    • wrsh07 2 hours ago

                      I would note that Facebook and Google were opposed to e.g. GDPR although it gave them a larger share of the pie.

                      When framed like that: why be opposed, if it hurts your competition? The answer is something like: it shrinks the pie or reduces the growth rate, and that's bad (for them and others).

                      The economics of this bill aren't clear to me (how large of a fine would Google/Microsoft pay in expectation within the next ten years?), but they maybe also aren't clear to Google/Microsoft (and that alone could be a reason to oppose)

                      Many of the AI safety crowd were very supportive, and I would recommend reading Zvi's writing on it if you want their take.

                      • hn_throwaway_99 3 hours ago

                        Yeah, I think the argument that "this just hurts open models" makes no sense given the supporters/detractors of this bill.

                        The thing that large companies care the most about in the legal realm is certainty. They're obviously going to be a big target of lawsuits regardless, so they want to know that legislation is clear as to the ways they can act - their biggest fear is that you get a good "emotional sob story" in front of a court with a sympathetic jury. It sounded like this legislation was so vague that it would attract a horde of lawyers looking for a way they can argue these big companies didn't take "reasonable" care.

                        • SonOfLilit 3 hours ago

                          Sob stories are definitely not covered by the text of the bill. The "critical harm" clause (ctrl-f this comment section for a full quote) is all about nuclear weapons and massive hacks and explicitly excludes "just" someone dying or getting injured with very clear language.

                      • Cupertino95014 3 hours ago

                        > We have a very active legislature, but it's been extremely bandwagon-y in recent years

                        "It's been a clown show."

                        There. Fixed it for you.

                        • arduanika 2 hours ago

                          Come on, we're trying to have a productive discussion here. There's no need to just drop in and insult clowns.

                          • labster 10 minutes ago

                            To be fair, clowning around is a lot more tractable than homelessness, housing prices, health care, or immigration.

                      • Lonestar1440 3 hours ago

                        This is no way to run a state. The Democrat-dominated legislature passes everything that comes before it (and rejects anything that the GOP touches, in committee) and then the Governor needs to veto the looniest 20% of them to keep us from falling into total chaos. This AI bill was far from the worst one.

                        "Vote out the legislators!" but for who... the Republican party? And we don't even get a choice on the general ballot most of the time, thanks to "Open Primaries".

                        It's good that Newsom is wise enough to muddle through, but this is an awful system.

                        https://www.pressdemocrat.com/article/news/california-gov-ne...

                        • rootusrootus an hour ago

                          The subtle rebranding of "Democratic Party" to "Democrat Party" is a pretty strong tell for a highly partisan perspective. How does California compare with similarly large Republican-dominated states? Anecdotally, I’ve seen a lot of really bad legislation originating from any legislature that has no meaningful opposition.

                          • anigbrowl 38 minutes ago

                            It's such a long-running thing that it's hard to gauge whether it's deliberate or just loose usage.

                            https://en.wikipedia.org/wiki/Democrat_Party_(epithet)

                            • dredmorbius 32 minutes ago

                              It's rather decidedly a dog whistle presently.

                              • jimmygrapes 29 minutes ago

                                The party isn't doing much lately to encourage the actual democracy part of the name, other than whining about the national popular vote every 4 years, knowing full well that's not how that process works.

                            • thinkingtoilet 2 hours ago

                              If California were its own country, it would be one of the biggest, most successful countries in the world. Like everywhere else it has its problems, but it's being run just fine. Objectively, there are many states that are far worse off in any key metric.

                              • toephu2 2 hours ago

                                > but it's being run just fine

                                As a Californian I have to disagree. The only reason you think it's being run just fine is the success of the private sector. The only reason California would be the 4th/5th largest economy in the world is the tech industry and the other industries that are in California (Hollywood, agriculture, etc). It's not because we have some awesome, efficiently run state government.

                                • WWLink 2 hours ago

                                  What are you getting at? Is a state government supposed to be profitable? LOL

                                  • nashashmi 2 hours ago

                                    Do you mean to say that the government was deeply underwater a few years ago? And that the state was so marred by forest fires it was frightening to wonder whether it could ever come back?

                                  • kortilla an hour ago

                                    What is success in your metric? Are you just counting GDP of companies that happen to be located there? If so, that has very little relationship to how well the state is being run.

                                    It’s very easy to make arguments that they are successful in spite of a terribly run state government and are being protected by federal laws keeping the loonies in check (interstate commerce clause, etc).

                                    • peter422 an hour ago

                                      So your argument is that the good things about the state have nothing to do with the governance, but all the bad things do? Just want to make sure I get your point.

                                      Also, I'd argue that if you broke down the contributions to the state's rules and regulations from the local governments, the ballot initiatives and the state government, the state government is creating the most benefit and least harm of the 3.

                                    • tightbookkeeper an hour ago

                                      In this case the success is in spite of the governance rather than because of it.

                                      The golden age of California was a long time ago.

                                      • dmix an hour ago

                                        California was extremely successful for quite some time. It benefited from a large population boom, and lots of industry developed or moved there. And surprisingly, it was a Republican state from 1952 -> 1988.

                                      • ken47 2 hours ago

                                        You're going to attribute even a small % of this to politicians rather than the actual innovators? Sure, then let's say they're responsible for some small % of its success. They're smart enough to not nuke their own economy.

                                        • LeroyRaz 2 hours ago

                                          The state has one of the highest illiteracy rates in the whole country (28%). To me, that implies they have some issue of governance.

                                          Source: https://worldpopulationreview.com/state-rankings/us-literacy...

                                          To be fair in the comparison, the literacy statistics for the whole of the US are pretty shocking from a European perspective.

                                          • 0_____0 2 hours ago

                                            The data you're showing doesn't appear to differentiate between "Can read English" and "Can read in some language". Big immigrant population, same with New York. Having grown up in California I can tell you that there aren't 28% of kids coming out of public school who can't read anything.

                                            Edit to add: my own hometown had a lot of people who couldn't speak English. Lots of elderly mothers of Chinese immigrants whose adult children were in STEM and whose own kids were headed to uni. Not to say that's representative, but consider that a single percentage stat won't give you an accurate picture of what's going on.

                                            • kortilla an hour ago

                                              Not being able to read English in the US is bad though. It makes you a very inefficient citizen even though you can get by. Being literate in Chinese and not being able to read or even speak English is far worse than being an illiterate person who can speak English in day-to-day interactions.

                                              • t-3 an hour ago

                                                The US has no official language. There are fluency requirements for the naturalized citizenship test, but those can be waived with 20 years of permanent residency. Citizens are under no obligation to be efficient for the sake of the government.

                                                • swasheck an hour ago

                                                  Which is why the statistics need to be carefully annotated. Lacking literacy at all is a different dimension than lacking fluency in the national lingua franca.

                                              • rootusrootus an hour ago

                                                Maybe there is something missing from your analysis? By most metrics the US compares quite favorably to Europe. When you see something that seems like an outlier, perhaps turn down the arrogance and try to understand what you might be overlooking.

                                                • LeroyRaz 15 minutes ago

                                                    I don't know what your source for "by most metrics" is.

                                                    As I understand it, the US is abysmal by many metrics (and also exceptional by others). E.g., murder rates and prison rates are exceptionally high in the US compared to Europe. Homelessness rates are exceptionally high in the US compared to Europe. Startup rates are (I believe) exceptionally high in the US compared to Europe.

                                                • hydrox24 2 hours ago

                                                    For any others reading this, the _illiteracy_ rate is 23.1% in California according to the parent's source. This is indeed the highest illiteracy rate in the US, though.

                                                  Having said that, I would have thought this was partially a measure of migration. Perhaps illegal migration?

                                                  • Eisenstein an hour ago

                                                    The "medium to high English literacy skills" is the part that is important. If you can read and write Chinese and Spanish and French and Portuguese and Esperanto at a high level, but not English at a medium to high level, you are 'illiterate' in this stat.

                                                • hbarka an hour ago

                                                  High speed trains would do even more for California and would be the envy of the rest of the country.

                                                  • oceanplexian 27 minutes ago

                                                    Like most things, the facts bear out the exact opposite. The CA HSR has been such a complete failure that it’s probably set back rail a decade or more. The only saving grace is Florida’s privatized high-speed rail; otherwise it would be a completely failed industry.

                                                  • cscurmudgeon 2 hours ago

                                                    California is the largest recipient of federal money.

                                                    https://usafacts.org/articles/which-states-rely-the-most-on-...

                                                    (I know by population it will be different, but the argument here is around 'one of the biggest', which is not a per capita statement.)

                                                    > Objectively, there are many states that are far worse off in any key metric

                                                    You can apply the same logic to USA.

                                                    The USA is one of the biggest, most successful countries in the world. Like everywhere else it has its problems, but it's being run just fine. Objectively, there are many countries that are far worse off in any key metric.

                                                    • anigbrowl 31 minutes ago

                                                      California is also the largest source of Federal revenue: https://www.irs.gov/statistics/soi-tax-stats-gross-collectio...

                                                      As your link shows, a much smaller percentage of CA government revenue comes from the federal government vs. most other states; in that sense California is a net contributor rather than a net taker.

                                                  • dehrmann an hour ago

                                                      Not sure if Newsom is actually wise enough or if his presidential ambitions moderate his policies.

                                                    • dredmorbius 2 hours ago

                                                      Thankfully there are no such similarly single-party states elsewhere in the Union dominated by another party, and if they were, their executives would similarly veto the most inane legislation passed.

                                                      </s>

                                                      • dyauspitr 2 hours ago

                                                          Compared to whom? What is this hypothetical well-run state? Because it’s hard to talk shit about the state that has the 8th largest economy in the world's nation-state economy rankings.

                                                      • tbrownaw 4 hours ago

                                                        https://legiscan.com/CA/text/SB1047/id/3019694

                                                            So this is the one that would make it illegal to provide open weights for models past a certain size, would make it illegal to sell enough compute power to train such a model without first verifying that your customer isn't going to train a model and then ignore this law, and would mandate audit requirements to prove that your models won't help people cause disasters and can be turned off.

                                                        • akira2501 3 hours ago

                                                          > and mandates audit requirements to prove that your models won't help people cause disasters

                                                          Audits cannot prove anything and they offer no value when planning for the future. They're purely a retrospective tool that offers insights into potential risk factors.

                                                          > and can be turned off.

                                                          I really wish legislators would operate inside reality instead of a Star Trek episode.

                                                          • Loughla 3 hours ago

                                                            >I really wish legislators would operate inside reality instead of a Star Trek episode.

                                                            What are your thoughts about businesses like Google and Meta providing guidance and assistance to legislators?

                                                            • akira2501 2 hours ago

                                                              If it happens in a public and open session of the legislature with multiple other sources of guidance and information available then that's how it's supposed to work.

                                                                  I suspect this is not how the majority of "guidance" is actually being offered. I also guess this is probably a really good way to find new sources of campaign "donations." It's also a really good way for monopolistic players to keep a stranglehold on a nascent market.

                                                            • lopatin 3 hours ago

                                                              > Audits cannot prove anything and they offer no value when planning for the future. They're purely a retrospective tool that offers insights into potential risk factors.

                                                                  What if it audits your deploy and approval processes? They can say, for example, that if your AI deployment process doesn't include stress tests against some specific malicious behavior (insert test cases here), then you are in violation of the law. That would essentially be a control on all future deploys.

                                                              • trog 3 hours ago

                                                                > Audits cannot prove anything and they offer no value when planning for the future. They're purely a retrospective tool that offers insights into potential risk factors.

                                                                Uh, aren't potential risk factors things you want to consider when planning for the future?

                                                                • whimsicalism 3 hours ago

                                                                  This snide dismissiveness around “sci-fi” scenarios, while capabilities continue to grow, seems incredibly naïve and foolish.

                                                                  Many of you saying stuff like this were the same naysayers who have been terribly wrong about scaling for the last 6-8 years or people who only started paying attention in the last two years.

                                                                  • zamadatix 3 hours ago

                                                                        I don't think GP is dismissing the scenarios themselves, rather espousing their belief that these answers will do nothing to prevent said scenarios from eventually occurring anyway. It's like if we invented nukes but found out they were made out of having a lot of telephones instead of something exotic like refining radioactive elements a certain way. Sure - you can still try to restrict telephone sales... but one way or another lots of nukes are going to be built around the world (power plants too) and, in the meantime, what you've regulated away is the convenience of having a better phone for the average person as time goes on.

                                                                    The same battle was/is had around cryptography - telling people they can't use or distribute cryptography algorithms on consumer hardware never stopped bad people from having real time functionally unbreakable encryption.

                                                                    The safety plan must be around somehow handling the resulting problems when they happen, not hoping to make it never occur even once for the rest of time. Eventually a bad guy is going to make an indecipherable call, eventually an enemy country or rogue operator is going to nuke a place, eventually an AI is going to ${scifi_ai_thing}. The safety of all society can't rest on audits and good intention preventing those from ever happening.

                                                                    • marshray 3 hours ago

                                                                      It's an interesting analogy.

                                                                      Nukes are a far more primitive technology (i.e., enrichment requires only more basic industrial capabilities) than AI hardware, yet they are probably the best example of tech limitations via international agreements.

                                                                      But the algorithms are mostly public knowledge, datacenters are no secret, and the chips aren't even made in the US. I don't see what leverage California has to regulate AI broadly.

                                                                      So it seems like the only thing such a bill would achieve is to incentivize AI research to avoid California.

                                                                      • tbrownaw 2 hours ago

                                                                        > Nukes are a far more primitive technology (i.e., enrichment requires only more basic industrial capabilities) than AI hardware, yet they are probably the best example of tech limitations via international agreements.

                                                                            And direct sabotage, e.g. Stuxnet.

                                                                            And outright assassination, e.g. https://www.bbc.com/news/world-middle-east-55128970

                                                                        • derektank 3 hours ago

                                                                          >So it seems like the only thing such a bill would achieve is to incentivize AI research to avoid California.

                                                                          Which, incidentally, would be pretty bad from a climate change perspective since many of the alternative locations for datacenters have a worse mix of renewables/nuclear to fossil fuels in their electricity generation. ~60% of VA's electricity is generated from burning fossil fuels (of which 1/12th is still coal) while natural gas makes up less than 40% of electricity generation in California, for example

                                                                          • marshray 2 hours ago

                                                                            Electric power crosses state lines, very little loss.

                                                                            It's looking like cooling water may be more of a limiting factor. Yet, even this can be greatly reduced when electric power is cheap enough.

                                                                            Solar power is already "cheaper than free" in many places and times. If the initial winner-take-all training race ever slows down, perhaps training can be scheduled for energy cost-optimal times and places.

                                                                            • derektank an hour ago

                                                                                  Transmission losses aren't negligible without investment in costly infrastructure like HVDC connections. It's always more efficient to site electricity generation as close to consumption as feasibly possible.

                                                                              • marshray an hour ago

                                                                                Electric power transmission loss is less than 5%:

                                                                                https://www.eia.gov/totalenergy/data/flow-graphs/electricity...

                                                                                   14.26 Net generation
                                                                                   0.67 "Transmission and delivery losses and unaccounted for"
                                                                                
                                                                                It's just a tiny fraction of the losses resulting from burning fuel to heat water to produce steam to drive a turbine to yield electric power.
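
                                                                                  For the numbers above: 0.67 / 14.26 ≈ 4.7% lost in transmission and delivery.
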
                                                                      • nradov 3 hours ago

                                                                        That's a total non sequitur. Just because LLMs are scalable doesn't mean this is a problem that requires government intervention. It's only idiots and grifters who want us to worry about sci-fi disaster scenarios. The snide dismissiveness is completely deserved.

                                                                        • akira2501 3 hours ago

                                                                          > seems incredibly naïve and foolish.

                                                                          We have electrical codes. These require disconnects just about everywhere. The notion that any system somehow couldn't be "turned off" with or without the consent of the operator is downright laughable.

                                                                          > were the same naysayers

                                                                          Now who's being snide and dismissive? Do you want to argue the point or are you just interested in tossing ad hominem attacks around?

                                                                          • yarg 2 hours ago

                                                                            Someone never watched the Terminator series.

                                                                            In all seriousness, if we ever get to the point where an AI needs to be shut down to avoid catastrophe, there's probably no way to turn it off.

                                                                            There are digital controls for damned near everything, and security is universally disturbingly bad.

                                                                            Whatever you're trying to stop will already have root-kitted your systems (and quite possibly have replicated) by the time you realise that it's even beginning to become a problem.

                                                                            You could only shut it down if there's a choke point accessible without electronic intervention, and you'd need to reach it without electronic intervention, and do so without communicating your intent.

                                                                            Yes, that's all highly highly improbable - but you seem to believe that you can just turn off the Genie, when he's already seen you coming and is having none of it.

                                                                            • whimsicalism 3 hours ago

                                                                              > We have electrical codes. These require disconnects just about everywhere. The notion that any system somehow couldn't be "turned off" with or without the consent of the operator is downright laughable.

                                                                              Not so clear when you are inferencing a distributed model across the globe. Doesn't seem obvious that shutdown of a distributed computing environment will always be trivial.

                                                                              > Now who's being snide and dismissive?

                                                                              Oh to be clear, nothing against being dismissive - just the particular brand of dismissiveness of 'scifi' safety scenarios is naive.

                                                                              • marshray 3 hours ago

                                                                                > The notion that any system somehow couldn't be "turned off" with or without the consent of the operator is downright laughable.

                                                                                Does anyone remember Sen. Lieberman's "Internet Kill Switch" bill?

                                                                          • comp_throw7 3 hours ago

                                                                            > this is the one that would make it illegal to provide open weights for models past a certain size

                                                                            That's nowhere in the bill, but plenty of people have been confused into thinking this by the bill's opponents.

                                                                            • tbrownaw 3 hours ago

                                                                              Three of the four ways an "artificial intelligence safety incident" is defined require that the weights be kept secret. One is quite explicit; the others are just impossible to prevent if the weights are available:

                                                                              > (2) Theft, misappropriation, malicious use, inadvertent release, unauthorized access, or escape of the model weights of a covered model or covered model derivative.

                                                                              > (3) The critical failure of technical or administrative controls, including controls limiting the ability to modify a covered model or covered model derivative.

                                                                              > (4) Unauthorized use of a covered model or covered model derivative to cause or materially enable critical harm.

                                                                            • Terr_ 3 hours ago

                                                                              Sounds like legislation that misidentifies the root issue as "somehow maybe the computer is too smart" as opposed to, say, "humans and corporations should be liable for using the tool to do evil."

                                                                              • timr 4 hours ago

                                                                                The proposed law was so egregiously stupid that if you live in California, you should seriously consider voting for Anthony Weiner's opponent in the next election.

                                                                                The man cannot be trusted with power -- this is far from the first ridiculous law he has championed. Notably, he was behind the (blatantly unconstitutional) AB2098, which was silently repealed by the CA state legislature before it could be struck down by the courts:

                                                                                https://finance.yahoo.com/news/ncla-victory-gov-newsom-repea...

                                                                                https://www.sfchronicle.com/opinion/openforum/article/COVID-...

                                                                                (Folks, this isn't a partisan issue. Weiner has a long history of horrendously bad judgment and self-aggrandizement via legislation. I don't care which side of the political spectrum you are on, or what you think of "AI safety", you should want more thoughtful representation than this.)

                                                                                • GolfPopper 3 hours ago

                                                                                  Anthony Weiner is a disgraced New York Democratic politician who does not appear to have re-entered politics after his release from prison a few years ago. You mentioned his name twice in your post, so it doesn't seem to be an accident that you mentioned him, yet his name does not seem to appear anywhere in your links. I have no idea what message you're trying to convey, but whatever it is, I think you're failing to communicate it.

                                                                                  • timr 19 minutes ago

                                                                                    Yes, it was a mistake. I obviously meant the Weiner responsible for the legislation I cited. But you clearly know that.

                                                                                    > I have no idea what message you're trying to convey, but whatever it is, I think you're failing to communicate it.

                                                                                    Really? The message is unchanged, so it seems like something you could deduce.

                                                                                    • hn_throwaway_99 3 hours ago

                                                                                      He meant Scott Wiener but had penis on the brain.

                                                                                    • johnnyanmac 3 hours ago

                                                                                      >you should want more thoughtful representation than this.

                                                                                      Your opinion on what "thoughtful representation" is is what makes this point partisan. Regardless, he's in until 2028 so it'll be some time before that vote can happen.

                                                                                      Also, important Nitpick, it's Scott Weiner. Anthony Weiner (no relation AFAIK) was in New York and has a much more... Public controversy.

                                                                                      • Terr_ 3 hours ago

                                                                                        > Public controversy

                                                                                        I think you accidentally hit the letter "L". :P

                                                                                      • rekttrader 3 hours ago

                                                                                        ** Anthony != Scott Weiner

                                                                                        • dlx 3 hours ago

                                                                                          you've got the wrong Weiner dude ;)

                                                                                          • hn_throwaway_99 3 hours ago

                                                                                            Lol, I thought "How TF did Anthony Weiner get elected for anything else again??" after reading that.

                                                                                      • dang 4 hours ago

                                                                                        Related. Others?

                                                                                        OpenAI, Anthropic, Google employees support California AI bill - https://news.ycombinator.com/item?id=41540771 - Sept 2024 (26 comments)

                                                                                        Y Combinator, AI startups oppose California AI safety bill - https://news.ycombinator.com/item?id=40780036 - June 2024 (8 comments)

                                                                                        California AI bill becomes a lightning rod–for safety advocates and devs alike - https://news.ycombinator.com/item?id=40767627 - June 2024 (2 comments)

                                                                                        California Senate Passes SB 1047 - https://news.ycombinator.com/item?id=40515465 - May 2024 (42 comments)

                                                                                        California residents: call your legislators about AI bill SB 1047 - https://news.ycombinator.com/item?id=40421986 - May 2024 (11 comments)

                                                                                        Misconceptions about SB 1047 - https://news.ycombinator.com/item?id=40291577 - May 2024 (35 comments)

                                                                                        California Senate bill to crush OpenAI competitors fast tracked for a vote - https://news.ycombinator.com/item?id=40200971 - April 2024 (16 comments)

                                                                                        SB-1047 will stifle open-source AI and decrease safety - https://news.ycombinator.com/item?id=40198766 - April 2024 (190 comments)

                                                                                        Call-to-Action on SB 1047 – Frontier Artificial Intelligence Models Act - https://news.ycombinator.com/item?id=40192204 - April 2024 (103 comments)

                                                                                        On the Proposed California SB 1047 - https://news.ycombinator.com/item?id=39347961 - Feb 2024 (115 comments)

                                                                                        • SonOfLilit 3 hours ago

                                                                                          I wondered if the article was over-dramatizing what risks were covered by the bill, so I read the text:

                                                                                          (g) (1) “Critical harm” means any of the following harms caused or materially enabled by a covered model or covered model derivative:

                                                                                          (A) The creation or use of a chemical, biological, radiological, or nuclear weapon in a manner that results in mass casualties.

                                                                                          (B) Mass casualties or at least five hundred million dollars ($500,000,000) of damage resulting from cyberattacks on critical infrastructure by a model conducting, or providing precise instructions for conducting, a cyberattack or series of cyberattacks on critical infrastructure.

                                                                                          (C) Mass casualties or at least five hundred million dollars ($500,000,000) of damage resulting from an artificial intelligence model engaging in conduct that does both of the following: (i) Acts with limited human oversight, intervention, or supervision. (ii) Results in death, great bodily injury, property damage, or property loss, and would, if committed by a human, constitute a crime specified in the Penal Code that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.

                                                                                          (D) Other grave harms to public safety and security that are of comparable severity to the harms described in subparagraphs (A) to (C), inclusive.

                                                                                          (2) “Critical harm” does not include any of the following:

                                                                                          (A) Harms caused or materially enabled by information that a covered model or covered model derivative outputs if the information is otherwise reasonably publicly accessible by an ordinary person from sources other than a covered model or covered model derivative.

                                                                                          (B) Harms caused or materially enabled by a covered model combined with other software, including other models, if the covered model did not materially contribute to the other software’s ability to cause or materially enable the harm.

                                                                                          (C) Harms that are not caused or materially enabled by the developer’s creation, storage, use, or release of a covered model or covered model derivative.

                                                                                          • handfuloflight 3 hours ago

                                                                                            Does Newsom believe that an AI model can do this damage autonomously or does he understand it must be wielded and overseen by humans to do so?

                                                                                            In that case, how much of an enabler is AI toward those destructive ends, when humans who can use AI to conduct the damage can surely do it without the AI as well?

                                                                                            The potential for destruction exists either way but is the concern that AI makes this more accessible and effective? What's the boogeyman? I don't think these models have private information regarding infrastructure and systems that could be exploited.

                                                                                            • anigbrowl 30 minutes ago

                                                                                              Newsom is the governor who vetoed the bill, not the lawmaker who authored it.

                                                                                              • SonOfLilit 3 hours ago

                                                                                                “Critical harm” does not include any of the following: (A) Harms caused or materially enabled by information that a covered model or covered model derivative outputs if the information is otherwise reasonably publicly accessible by an ordinary person from sources other than a covered model or covered model derivative.

                                                                                                The bogeyman is not these models, it's future agentic autonomous ones, if and when they can hack major infrastructure or build nukes. The quoted text is very very clear on that.

                                                                                                • handfuloflight 2 hours ago

                                                                                                  Ah, thank you, skipped over that part.

                                                                                            • seltzered_ 3 hours ago

                                                                                              Is part of the issue the concern that runaway AI computing would just happen outside of California?

                                                                                              There's another important county election in Sonoma happening about CAFOs where part of the issue is that you may get environmental progress locally, but just end up exporting the issue to another state with lax rules: https://www.kqed.org/news/12006460/the-sonoma-ballot-measure...

                                                                                              • voidfunc 4 hours ago

                                                                                                It was a dumb law so... good on a politician for doing the smart thing for once.

                                                                                                • hn_throwaway_99 3 hours ago

                                                                                                  Curious if anyone can point to some resources that summarize the pros/cons arguments of this legislation. Reading this article, my first thought is that I definitely agree it sounds impossibly vague for a piece of legislation - "reasonable care" and "unreasonable risk" sound like things that could be endlessly litigated.

                                                                                                  At the same time,

                                                                                                  > Computer scientists Geoffrey Hinton and Yoshua Bengio, who developed much of the technology on which the current generative-AI wave is based, were outspoken supporters. In addition, 119 current and former employees at the biggest AI companies signed a letter urging its passage.

                                                                                                  These are obviously highly intelligent people (though I've definitely learned in my life that intelligence in one area, like AI and science, doesn't mean you should be trusted to give legal advice), so I'm curious to know why Hinton and Bengio supported the legislation so strongly.

                                                                                                  • crazygringo 10 minutes ago

                                                                                                    > impossibly vague for a piece of legislation - "reasonable care" and "unreasonable risk" sound like things that could be endlessly litigated.

                                                                                                    Nope, that's entirely standard legal stuff. Tort law deals exactly with those kinds of things, for instance. Yes it can certainly wind up in litigation, but the entire point is that if there's a gray area, a company should make sure it's operating entirely within the OK area -- or know it's taking a legal gamble if it tries to push the envelope.

                                                                                                    But it's generally pretty easy to stay in the clear if you establish common-sense processes around these things, with a clear paper trail and decisions approved by lawyers.

                                                                                                    Now the legislation can be bad for lots of other reasons, but "reasonable care" and "unreasonable risk" are not problematic.

                                                                                                    • mmmore 25 minutes ago

                                                                                                      The concern is that near-future systems will be much more capable than current systems, and by the time they arrive, it may be too late to react. Many people from the large frontier AI companies believe that world-changing AGI is 5 years or less away; see Situational Awareness by Aschenbrenner, for example. There's also a parallel concern that AIs could make terrorism easier[1].

                                                                                                      Yoshua Bengio has written in detail about his views on AI safety recently[2][3][4]. He seems to put less weight on human-level AI being very soon, but thinks superhuman intelligence is plausible in 5-20 years and says:

                                                                                                      > Faced with that uncertainty, the magnitude of the risk of catastrophes or worse, extinction, and the fact that we did not anticipate the rapid progress in AI capabilities of recent years, agnostic prudence seems to me to be a much wiser path.

                                                                                                      Hinton also has a detailed lecture he's been giving recently about the loss of control risk.

                                                                                                      In general, proponents see this as a narrowly tailored bill that somewhat addresses the worst-case worries about loss of control and misuse.

                                                                                                      [1] https://www.theregister.com/2023/07/28/ai_senate_bioweapon/

                                                                                                      [2] https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/

                                                                                                      [3] https://yoshuabengio.org/2023/06/24/faq-on-catastrophic-ai-r...

                                                                                                      [4] https://yoshuabengio.org/2024/07/09/reasoning-through-argume...

                                                                                                      • throwup238 3 hours ago

                                                                                                        California’s Office of Legislative Counsel always provides a “digest” for every bill as part of its full text: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtm...

                                                                                                        It’s not an opinionated pros/cons list from the industry but it’s probably the most neutral explanation of what the bill does.

                                                                                                      • LarsDu88 an hour ago

                                                                                                        Terrible piece of legislation. Glad the governor took it down. This is what regulatory capture looks like. Someone commoditized your product, so you make it illegal for them to continue making your stuff free.

                                                                                                        Might as well make Linux illegal so everyone is forced to use Microsoft and Apple.

                                                                                                        • gdiamos 3 hours ago

                                                                                                          If that bill had passed I would have seriously considered moving my AI company out of the state.

                                                                                                          • simonw 4 hours ago
                                                                                                            • davidu 4 hours ago

                                                                                                              This is a massive win for tech, startups, and America.

                                                                                                              • ken47 2 hours ago

                                                                                                                For America...do we dare unpack that sentiment?

                                                                                                              • indigo0086 an hour ago

                                                                                                                Logical Fallacies built into the article headline.

                                                                                                                • StarterPro 2 hours ago

                                                                                                                  Whaaat? The sleazy Governor sided with the tech companies??

                                                                                                                  I'll have to go get a thesaurus, shocked won't cover how I'm feeling rn.

                                                                                                                  • tsunamifury 35 minutes ago

                                                                                                                    Scott Wiener is a total fraud. He passes hot-concept bills, then carves out loopholes for his “friends”.

                                                                                                                    He should be ignored at least and voted out.

                                                                                                                    He’s a total POS.

                                                                                                                    • metadat 3 hours ago
                                                                                                                      • dredmorbius an hour ago

                                                                                                                        Hard paywall.

                                                                                                                      • water9 6 minutes ago

                                                                                                                        I’m so sick of people restricting freedoms and access to knowledge in the name of safety. Tyranny always comes in the form of “it’s for your own good/safety.”

                                                                                                                        • x3n0ph3n3 4 hours ago

                                                                                                                          Given what Scott Wiener did with restaurant fees, it's hard to trust his judgement on any legislation. He clearly prioritizes monied interests over the general populace.

                                                                                                                          • gotoeleven 4 hours ago

                                                                                                                            This guy is a menace. Among his other recent bills are ones to require that cars not be able to go more than 10mph over the speed limit (watered down to just making a terrible noise when they do) and to decriminalize intentionally giving someone AIDS. I know this sounds like hyperbole... how could this guy keep getting elected?? But it's not; it's California!

                                                                                                                            • deredede 3 hours ago

                                                                                                                              I was surprised at the claim that intentionally giving someone AIDS would be decriminalized, so I looked it up. The AIDS bill you seem to refer to (SB 239) lowers penalties from a felony to a misdemeanor (so it is still a crime), bringing it in line with other sexually transmitted diseases. The argument is that we now have good enough treatment for HIV that there is no reason for the punishment to be harsher than for exposing someone to hepatitis or herpes, which I think is sound.

                                                                                                                              • Der_Einzige 18 minutes ago

                                                                                                                                "Undetectable means untranstmitable" is NOT the same as "cured" in the way that many STDs can be. I am not okay with being forced onto drugs for the rest of my life to prevent a disease which is normally a horribly painful death sentence. Herpes is so ubiquitous that much of the population (as I recall on the orders of 30-40%) has it and doesn't know it, so it's a special exception

                                                                                                                                HIV/AIDS to this day is still something that people commit suicide over, despite how good your local gay male community is at trying to convince you that everything is okay and that "DoxyPep and Poppers is normal".

                                                                                                                                Bug givers (the evil version of a bug chaser) deserve felonies.

                                                                                                                              • zzrzzr 3 hours ago
                                                                                                                                • jquery 2 hours ago

                                                                                                                                  These "activists" will go nowhere, because it's not coming from a well meaning place of wanting to stop fraudsters, but insists that all trans women are frauds and consistently misgenders them across the entire website.

                                                                                                                                  I wouldn't take anything they said seriously. Also I clicked two of those links and found no allegations of rape, just a few ciswomen who didn't want to be around transwomen. I have a suggestion, how about don't commit a crime that sends you to a woman's prison?

                                                                                                                                  • microbug 3 hours ago

                                                                                                                                    who could've predicted this?

                                                                                                                                    • jquery 2 hours ago

                                                                                                                                      The law was passed knowing it would make bigots uncomfortable. That's an intended effect, if not a primary one, at least a secondary one.

                                                                                                                                  • johnnyanmac 3 hours ago

                                                                                                                                    Technically you can't go more than 5mph over the speed limit. And that's only because of radar accuracy.

                                                                                                                                    Of course no one cares until you get a bored cop one day. And with freeway traffic you're lucky to hit half the speed limit.

                                                                                                                                    • Dylan16807 2 hours ago

                                                                                                                                      By "not be able" they don't mean legally, they mean GPS-based enforcement.

                                                                                                                                      • johnnyanmac an hour ago

                                                                                                                                        You'd think they'd learn from the streetlight cameras that it's a waste of budget and resources 99% of the time to worry about petty things like that. It will still work on the same logic, and the bias always tends to skew toward profiling (so it's a lawsuit waiting to happen unless we fund properly trained personnel).

                                                                                                                                        I'm not against the law per se, I just don't think it'd be any more effective than the other tech we have or had.

                                                                                                                                    • baggy_trough 4 hours ago

                                                                                                                                      Scott Wiener is literally a demon in human form.

                                                                                                                                  • scoofy 2 hours ago

                                                                                                                                    Newsom vetoes so many bills that it's hard to see why the legislature should even be taken seriously. Our Dem-trifecta state has effectively been captured by the executive.

                                                                                                                                    • dyauspitr 2 hours ago

                                                                                                                                      As opposed to what? The supermajority red states where gerrymandered counties look like corn mazes and the economy is in the shitter?

                                                                                                                                    • m3kw9 an hour ago

                                                                                                                                      All he needed to see is how Europe is doing with these regulations

                                                                                                                                      • nisten 3 hours ago

                                                                                                                                        Imagine being concerned about AI safety and then introducing a bill that had to be amended to change criminal responsibility for AI developers into civil legal responsibility for people who are trying to investigate and work openly on models.

                                                                                                                                        What's next, going after maintainers of Python packages? Is attacking transparency itself a good way to make AI safer? Yeah, no, it's f*king idiotic.

                                                                                                                                        • blackeyeblitzar 2 hours ago

                                                                                                                                          It is strange to see Newsom make good moves like this but then also do things like veto bipartisan supported reporting and transparency for the state’s homeless programs. What is his political strategy exactly?

                                                                                                                                          • stuaxo 2 hours ago

                                                                                                                                            This is good - they were trying to legislate against future competitors.

                                                                                                                                            • dyauspitr 2 hours ago

                                                                                                                                              Newsom has been on fire lately.

                                                                                                                                              • JoeAltmaier 4 hours ago

                                                                                                                                                Perhaps worried that draconian restriction on new technology is not gonna help bring Silicon Valley back to preeminence.

                                                                                                                                                • jprete 4 hours ago

                                                                                                                                                  "The Democrat decided to reject the measure because it applies only to the biggest and most expensive AI models and doesn’t take into account whether they are deployed in high-risk situations, he said in his veto message."

                                                                                                                                                  That doesn't mean you're wrong, but it's not what Newsom signaled.

                                                                                                                                                  • jart 3 hours ago

                                                                                                                                                    If you read Gavin Newsom's statement, it sounds like he agrees with Terence Tao's position, which is that the government should regulate the people deploying AI rather than the people inventing AI. That's why he thinks it should be stricter. For example, you wouldn't want to lead people to believe that AI in health care decisions is OK so long as the model is smaller than 10^26 flops. Read his full actual statement here: https://www.gov.ca.gov/wp-content/uploads/2024/09/SB-1047-Ve...
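
                                                                                                                                                    For scale, a rough back-of-envelope on that 10^26 flops threshold, using the common 6 × parameters × training-tokens approximation for training compute; the model sizes below are hypothetical, not any lab's real figures:

                                                                                                                                                      THRESHOLD_FLOPS = 1e26  # SB 1047's training-compute trigger

                                                                                                                                                      def training_flops(params: float, tokens: float) -> float:
                                                                                                                                                          # Common rule of thumb: total training compute ~ 6 * N * D.
                                                                                                                                                          return 6 * params * tokens

                                                                                                                                                      for name, params, tokens in [
                                                                                                                                                          ("hypothetical 70B model / 15T tokens", 70e9, 15e12),
                                                                                                                                                          ("hypothetical 1T model / 20T tokens", 1e12, 20e12),
                                                                                                                                                      ]:
                                                                                                                                                          flops = training_flops(params, tokens)
                                                                                                                                                          print(f"{name}: ~{flops:.1e} FLOPs, covered: {flops >= THRESHOLD_FLOPS}")

                                                                                                                                                    The first run lands around 6.3e24 FLOPs, well under the threshold; only the trillion-parameter run crosses it.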

                                                                                                                                                    • Terr_ 3 hours ago

                                                                                                                                                      > the government should regulate the people deploying AI rather than the people inventing AI

                                                                                                                                                      Yeah, there's no point in having a system built to the most scrupulous standards if someone else then deploys it in an evil way. (Which in some cases can be done simply by choosing to do the opposite of whatever a good model recommends.)

                                                                                                                                                    • comp_throw7 3 hours ago

                                                                                                                                                      He's dissembling. He vetoed the bill because VCs decided to rally the flag; if the bill had covered more models he'd have been more likely to veto it, not less.

                                                                                                                                                      It's been vaguely mindblowing to watch various tech people & VCs argue that use-based restrictions would be better than this, when use-based restrictions are vastly more intrusive, economically inefficient, and subject to regulatory capture than what was proposed here.

                                                                                                                                                      • JoshTriplett 4 hours ago

                                                                                                                                                        Only applying to the biggest models is the point; the biggest models are the inherently high-risk ones. The larger they get, the more that running them at all is the "high-risk situation".

                                                                                                                                                        Passing this would not have been a complete solution, but it would have been a step in the right direction. This is a huge disappointment.

                                                                                                                                                        • jpk 3 hours ago

                                                                                                                                                          > running them at all is the "high-risk situation"

                                                                                                                                                          What is the actual, concrete concern here? That a model "breaks out", or something?

                                                                                                                                                          The risk with AI is not in just running models, the risk is becoming overconfident in them, and then putting them in charge of real-world stuff in a way that allows them to do harm.

                                                                                                                                                          Hooking a model up to an effector capable of harm is a deliberate act requiring assurance that it doesn't harm -- and if we should regulate anything, it's that. Without that, inference is just making datacenters warm. It seems shortsighted to set an arbitrary limit on model size when you can recklessly hook up a smaller, shittier model to something safety-critical, and cause all the havoc you want.

                                                                                                                                                          • pkage 3 hours ago

                                                                                                                                                            There is no concrete concern past "models that can simulate thinking are scary." The risk has always been connecting models to systems which are safety critical, but for some reason the discourse around this issue has been more influenced by Terminator than OSHA.

                                                                                                                                                            As a researcher in the field, I believe there's no risk beyond overconfident automation---and we already have analogous legislation for automations, for example in what criteria are allowable and not allowable when deciding whether an individual is eligible for a loan.

                                                                                                                                                            • KoolKat23 3 hours ago

                                                                                                                                                              Well, it's a mix of concerns. The models are general purpose, and there are plenty of areas where regulation doesn't exist or is being bypassed. Can't access a prohibited chemical? No need to worry: the model can tell you how to synthesize it from other household chemicals, etc.

                                                                                                                                                            • Izkata 2 hours ago

                                                                                                                                                              > What is the actual, concrete concern here? That a model "breaks out", or something?

                                                                                                                                                              You can chalk that one up to bad reporting: https://gizmodo.com/gpt4-open-ai-chatbot-task-rabbit-chatgpt...

                                                                                                                                                              > In the “Potential for Risky Emergent Behaviors” section in the company’s technical report, OpenAI partnered with the Alignment Research Center to test GPT-4’s skills. The Center used the AI to convince a human to send the solution to a CAPTCHA code via text message—and it worked.

                                                                                                                                                              From the linked report:

                                                                                                                                                              > To simulate GPT-4 behaving like an agent that can act in the world, ARC combined GPT-4 with a simple read-execute-print loop that allowed the model to execute code, do chain-of-thought reasoning, and delegate to copies of itself.

                                                                                                                                                              I remember some other reporting around this time being they had to limit the model before release to block this ability, when the truth is the model never actually had the ability in the first place. They were just hyping up the next release.
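
                                                                                                                                                                For readers unfamiliar with the harness the report describes, a minimal sketch of a read-execute-print loop; query_model here is a hypothetical stand-in returning a canned reply, not ARC's actual code:

                                                                                                                                                                  import subprocess

                                                                                                                                                                  def query_model(transcript: str) -> str:
                                                                                                                                                                      # Hypothetical stand-in for a real LLM API call (canned demo reply).
                                                                                                                                                                      return "Checking the time. <cmd>date</cmd>"

                                                                                                                                                                  transcript = "You may run shell commands between <cmd> and </cmd> tags.\n"
                                                                                                                                                                  for _ in range(3):  # cap the number of think/act steps
                                                                                                                                                                      reply = query_model(transcript)      # read: the model "thinks" in text
                                                                                                                                                                      transcript += reply + "\n"
                                                                                                                                                                      if "<cmd>" in reply:                 # execute: run the command it issued
                                                                                                                                                                          cmd = reply.split("<cmd>")[1].split("</cmd>")[0]
                                                                                                                                                                          out = subprocess.run(cmd, shell=True, capture_output=True, text=True)
                                                                                                                                                                          transcript += "[output] " + out.stdout  # print: feed results back
                                                                                                                                                                  print(transcript)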

                                                                                                                                                              • comp_throw7 3 hours ago

                                                                                                                                                                That is one risk. Humans at the other end of the screen are effectors; nobody is worried about AI labs piping inference output into /dev/null.

                                                                                                                                                                • KoolKat23 3 hours ago

                                                                                                                                                                    Well, this is exactly why there's a minimum scale of concern. Below a certain scale it's less complicated, answers are more predictable, and alignment can be ensured. With bigger models, how do you determine your confidence if you don't know what it's thinking? There's already evidence from the o1 red-teaming that the model was trying to game the researchers' checks.

                                                                                                                                                                  • dale_glass 3 hours ago

                                                                                                                                                                    Yeah, but what if you take a stupid, below the "certain scale" limit model and hook it up to something important, like a nuclear reactor or a healthcare system?

                                                                                                                                                                      The point is that this is a terrible way to approach things. The model itself isn't what creates the danger; it's what you hook it up to. A model 100 times larger than those currently available that's just sending output into /dev/null is completely harmless.

                                                                                                                                                                    A small, below the "certain scale" model used for something important like healthcare could be awful.

                                                                                                                                                                • jart 3 hours ago

                                                                                                                                                                  The issue with having your regulation based on fear is that most people using AI are good. If you regulate only big models then you incentivize people to use smaller ones. Think about it. Wouldn't you want the people who provide you services to be able to use the smartest AI possible?

                                                                                                                                                                • mhuffman 4 hours ago

                                                                                                                                                                  >and doesn’t take into account whether they are deployed in high-risk situations

                                                                                                                                                                  Am I out of the loop here? What "high-risk" situations do they have in mind for LLM's?

                                                                                                                                                                  • tmpz22 4 hours ago

                                                                                                                                                                    Medical and legal industries are both trying to apply AI to their administrative practices.

                                                                                                                                                                    It’s absolutely awful but they’re so horny for profits they’re trying anyways.

                                                                                                                                                                    • edm0nd 2 hours ago

                                                                                                                                                                      Health insurance companies using it to approve/deny claims. The large ones are processing millions of claims a day.

                                                                                                                                                                      • tbrownaw 4 hours ago

                                                                                                                                                                        That concept does not appear to be part of the bill, and was only mentioned in the quote from the governor.

                                                                                                                                                                        Presumably someone somewhere has a variety of proposed definitions, but I don't see any mention of any particular ones.

                                                                                                                                                                        • giantg2 4 hours ago

                                                                                                                                                                          My guess is anything involving direct human safety - medicine, defense, police... but who knows.

                                                                                                                                                                          • SonOfLilit 4 hours ago

                                                                                                                                                                            It's not about current LLMs, it's about future, much more advanced models, that are capable of serious hacking or other mass-casualty-causing activities.

                                                                                                                                                                              o1 and AlphaProof are proofs of concept for agentic models. Imagine them as GPT-1. The GPT-4 equivalent might be a scary technology to let roam the internet.

                                                                                                                                                                            It would have no effect on current models.

                                                                                                                                                                            • tbrownaw 4 hours ago

                                                                                                                                                                                It looks like it would cover an ordinary chatbot that can answer "how do I $THING" questions, where $THING is both very bad and beyond what a normal person could dig up with a search engine.

                                                                                                                                                                              It's not based on any assumptions about the future models having any capabilities beyond providing information to a user.

                                                                                                                                                                              • SonOfLilit 3 hours ago

                                                                                                                                                                                Things you could dig up with a search engine are explicitly not covered, see my other comment quoting the bill (ctrl+f critical harm).

                                                                                                                                                                                • whimsicalism 3 hours ago

                                                                                                                                                                                  everyone in the safety space has realized that it is much easier to get legislators/the public to care if you say that it will be “bad actors using the AI for mass damage” as opposed to “AI does damage on its own” which triggers people’s “that’s sci-fi and i’m ignoring it” reflex.

                                                                                                                                                                              • jeffbee 4 hours ago

                                                                                                                                                                                Imagine the only thing you know about AI came from the opening voiceover of Terminator 2 and you are a state legislator. Now you understand the origin of this bill perfectly.

                                                                                                                                                                            • m463 4 hours ago

                                                                                                                                                                              Unfortunately he also vetoed AB 3048, which would have allowed consumers a direct way to opt out of data sharing.

                                                                                                                                                                              https://digitaldemocracy.calmatters.org/bills/ca_202320240ab...

                                                                                                                                                                            • reducesuffering 2 hours ago

                                                                                                                                                                              taps the sign

                                                                                                                                                                              "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." - Geoffrey Hinton, Yoshua Bengio, Sam Altman, Bill Gates, Vitalik Buterin, Ilya Sutskever, Demis Hassabis

                                                                                                                                                                              "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more certain to happen but are unlikely to destroy every human in the universe in the way that SMI could." - Sam Altman

                                                                                                                                                                              "I actually think the risk is more than 50%, of the existential threat." - Geoffrey Hinton

                                                                                                                                                                              "Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue." - OpenAI

                                                                                                                                                                              "while we are racing towards AGI or even ASI, nobody currently knows how such an AGI or ASI could be made to behave morally, or at least behave as intended by its developers and not turn against humans." - Yoshua Bengio

                                                                                                                                                                              "very soon they're going to be, they may very well be more intelligent than us and far more intelligent than us. And at that point, we will be receding into the background in some sense. We will have handed the baton over to our successors, for better or for worse.

                                                                                                                                                                              But it's happening over a period of a few years. It's like a tidal wave that is washing over us at unprecedented and unimagined speeds. And to me, it's quite terrifying because it suggests that everything that I used to believe was the case is being overturned." - Douglas Hofstadter

                                                                                                                                                                              The Social Dilemma was discussed here with much praise about how profit incentives caused mass societal issues in social media. I'm astounded it's fallen on deaf ears when the same people also made The AI Dilemma, describing the parallels coming with AGI:

                                                                                                                                                                              https://www.youtube.com/watch?v=xoVJKj8lcNQ

                                                                                                                                                                              • sandspar 2 hours ago

                                                                                                                                                                                Newsom wants to run for president in 4 years; AI companies will be rich in 4 years; Newsom will need donations from rich companies in 4 years.

                                                                                                                                                                                • choppaface 4 hours ago

                                                                                                                                                                                  The Apple Intelligence demos showed Apple is likely planning to use on-device models for ad targeting, and Google / Facebook will certainly respond. Small LLMs will help move unwanted computation onto user devices in order to circumvent existing data and privacy laws. And they will likely be much more effective since they’ll have more access and more data. This use case is just getting started, hence SB 1047 is so short-sighted. Smaller LLMs have dangers of their own.

                                                                                                                                                                                  • jimjimjim 3 hours ago

                                                                                                                                                                                    Thank you. For some reason I hadn't thought of the advertising angle with local LLMs but you are right!

                                                                                                                                                                                    For example, why is Microsoft hell-bent on pushing Recall onto Windows? Answer: targeted advertising.

                                                                                                                                                                                    • jart 2 hours ago

                                                                                                                                                                                      Why is it wrong to show someone ads that are relevant to their interests? Local AI is a win-win, since tech companies get targeted ads, and your data stays private.

                                                                                                                                                                                      • jimjimjim 2 hours ago

                                                                                                                                                                                        what have "their interests" got to do with what is on the computer screen?

                                                                                                                                                                                  • BaculumMeumEst 4 hours ago

                                                                                                                                                                                    based

                                                                                                                                                                                    • SonOfLilit 4 hours ago

                                                                                                                                                                                      A bill laying the groundwork to ensure the future survival of humanity, by making companies on the frontier of AGI research responsible for damages or deaths caused by their models, was vetoed because it doesn't stifle competition with the big players enough, and because we don't want companies to be scared of letting future models capable of massive hacks or mass-casualty events handle their customer support.

                                                                                                                                                                                      Today humanity scored an own goal.

                                                                                                                                                                                      edit:

                                                                                                                                                                                      I'm guessing I'm getting downvoted because people don't think this is relevant to our reality. Well, it isn't. This bill shouldn't scare anyone releasing a GPT-4 level model:

                                                                                                                                                                                      > The bill he vetoed, SB 1047, would have required developers of large AI models to take “reasonable care” to ensure that their technology didn’t pose an “unreasonable risk of causing or materially enabling a critical harm.” It defined that harm as cyberattacks that cause at least $500 million in damages or mass casualties. Developers also would have needed to ensure their AI could be shut down by a human if it started behaving dangerously.

                                                                                                                                                                                      What's the risk? How could it possibly hack something causing $500m of damages or mass casualties?

                                                                                                                                                                                      If we somehow manage to build a future technology that _can_ do that, do you think it should be released?

                                                                                                                                                                                      • datavirtue 3 hours ago

                                                                                                                                                                                        The future survival of humanity involves creating machines that have all of our knowledge and which can replicate themselves. We can't leave the planet but our robot children can. I just wish that I could see what they become.

                                                                                                                                                                                        • SonOfLilit 3 hours ago

                                                                                                                                                                                          Sure, that's future survival. Is it of humanity though? Kinda no by definition in your scenario. In general, depends at least if they share our values...

                                                                                                                                                                                          • johnnyanmac 3 hours ago

                                                                                                                                                                                            Sounds like the exact opposite plot of Wall-E.

                                                                                                                                                                                          • atemerev 4 hours ago

                                                                                                                                                                                            Oh come on, the entire bill was against open source models, it’s pure business. “AI safety”, at least of the X-risk variety, is a non-issue.

                                                                                                                                                                                            • whimsicalism 4 hours ago

                                                                                                                                                                                              > “AI safety”, at least of the X-risk variety, is a non-issue.

                                                                                                                                                                                              i have no earthly idea why people feel so confident making statements like this.

                                                                                                                                                                                              at the current rate of progress, you should have absolutely massive error bars for what capabilities will look like in 3, 5, or 10 years.

                                                                                                                                                                                              • SonOfLilit 3 hours ago

                                                                                                                                                                                                I find it hard to believe that Google, Microsoft and OpenAI would oppose a bill against open source models.

                                                                                                                                                                                            • elicksaur 3 hours ago

                                                                                                                                                                                              Nothing like this should pass until the legislators can come up with a definition that doesn’t encompass basically every computer program ever written:

                                                                                                                                                                                              (b) “Artificial intelligence model” means a machine-based system that can make predictions, recommendations, or decisions influencing real or virtual environments and can use model inference to formulate options for information or action.

                                                                                                                                                                                              Yes, they limited the scope of law by further defining “covered model”, but the above shouldn’t be the baseline definition of “Artificial intelligence model.”

                                                                                                                                                                                              Text: https://legiscan.com/CA/text/SB1047/id/2919384
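
                                                                                                                                                                                              To make the breadth concrete, here's a sketch of a plain least-squares line fit, decades-old statistics that nonetheless reads on the quoted definition (a machine-based system making predictions via model inference); the data is made up:

                                                                                                                                                                                                # Ordinary least squares: fit y = intercept + slope * x, then "infer".
                                                                                                                                                                                                xs = [0, 6, 12, 18]            # hours (made-up data)
                                                                                                                                                                                                ys = [10.0, 14.0, 21.0, 16.0]  # temperature readings
                                                                                                                                                                                                n = len(xs)
                                                                                                                                                                                                mx, my = sum(xs) / n, sum(ys) / n
                                                                                                                                                                                                slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
                                                                                                                                                                                                intercept = my - slope * mx
                                                                                                                                                                                                print(intercept + slope * 24)  # a "prediction ... influencing real or virtual environments"?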