The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.
I don't see how you could realistically provide that guarantee.
I mean, you could create some kind of best-effort thing to make it more difficult, maybe.
If we knew how to make AI -- and this is going past just LLMs and stuff -- avoid doing hazardous things, we'd have solved the Friendly AI problem. Like, that's a good idea to work towards, maybe. But point is, we're not there.
Like, I'd be willing to see the state fund research on that problem, maybe. But I don't see how just mandating that models be conformant to that is going to be implementable.
The criticism from large AI companies to this bill sounds a lot like the pushbacks from auto manufacturers from adding safety features like seatbelts, airbags, and crumple zones. Just because someone else used a model for nefarious purposes doesn’t absolve the model creator from their responsibility to minimize that potential. We already do this for a lot of other industries like cars, guns, and tobacco - minimize the potential of harm despite individual actions causing the harm and not the company directly.
I have been following Andrew Ng for a long time and I admire his technical expertise. But his political philosophy around ML and AI has always focused on self regulation, which we have seen fail in countless industries.
The bill specifically mentions that creators of open source models that have been altered and fine tuned will not be held liable for damages from the altered models. It also only applies to models that cost more than $100M to train. So if you have that much money for training models, it’s very reasonable to expect that you spend some portion of it to ensure that the models do not cause very large damages to society.
So companies hosting their own models, like openAI and Anthropic, should definitely be responsible for adding safety guardrails around the use of their models for nefarious purposes - at least those causing loss of life. The bill mentions that it would only apply to very large damages (such as, exceeding $500M), so one person finding out a loophole isn’t going to trigger the bill. But if the companies fail to close these loopholes despite millions of people (or a few people millions of times) exploiting them, then that’s definitely on the company.
As a developer of AI models and applications, I support the bill and I’m glad to see lawmakers willing to get ahead of technology instead of waiting for something bad to happen and then trying to catch up like for social media.
The bill, passed by the state’s Senate last month and set for a vote from its general assembly in August, requires AI groups in California to guarantee to a newly created state body that they will not develop models with “a hazardous capability,” such as creating biological or nuclear weapons or aiding cyber security attacks.
I'll get right back to my AI-powered nuclear weapons program after I finish adding glue to my AI-developed pizza sauce.
The only thing that I fear more than big tech is a bunch of old people in congress trying to regulate technology who probably only know of AI from watching terminator.
Also, fun Scott Wiener fact. He was behind a big push to decriminalization knowingly spreading STDs even if you lied to your partner about having one.
While the proposed bill's goals are great, I am not so sure about how it would be tested and enforced.
It's cool that on current LLMs, the LLM can generate a 'no' response like those clips where people ask if the LLM has access to their location -- but then promptly gives advices to a closest restaurant as soon as the topic of location isn't on the spotlight.
There's also the part about trying to contain 'AI' to follow once it has ingested a lot of training data. Even goog doesn't know how to curb it once they are done with initial training.
I am all up for the bill. It's a good precedent but a more defined and enforce-able one would be great as well.
Small problem though: researchers have already found ways to circumvent LLM off-limit queries. I am not sure how you can prevent someone from asking the “wrong” question. It makes more sense for security practices to be hardened and made more robust
Here's the thing: How would you react, if this bill required all texts that could help someone "hack" to be removed from libraries? Outrageous, right? What if we only removed cybersecurity texts from libraries if they were written with the help of AI? Does it now become ok?
What if the bill "just" sought to prevent such texts from being written? Still outrageous? Well, that is what this bill is trying to do.
Everyone remember this the next time a gun store or manufacturer gets shielded from a class action led by shooting victims and their parents.
Remember that a fucking autocorrect program needed to be regulated so it couldn't spit out instructions for a bomb, that probably wouldn't work, and yet a company selling well more firepower than anyone would ever need for hunting or home defense was not at fault.
I agree, LLMs should not be telling angry teenagers and insane righrwungers how to blow up a building. That is a bad thing and should be avoided. What I am pointing out is the very real situation we are in right now a much more deadly threat exists. And that the various levels of government have bent over backwards to protect the people enabling it to be untouchable.
If you can allow a LLM company to be sued for serving up public information you should definitely be able to sue a corporation that built a gun whose only legit purpose is commiting a war crime level attack with.