It is the first law of its kind in the United States, requiring developers of the most advanced AI models, those that surpass current benchmarks and could significantly affect society, to disclose how closely they have followed national and international standards and best practices.
Developers must report serious safety incidents caused by AI models, such as large-scale cyberattacks, the deaths of 50 or more people, or significant financial losses. The law also protects whistleblowers.
"Disclosures are the main focus. According to Annika Schoene, a research scientist at Northeastern University's Institute for Experiential AI, "there is no enforceability even if the frameworks disclosed are problematic" since government and public understanding of frontier AI is limited.
Because many of the top AI companies are based in California, the state's laws could shape AI policy and affect users around the world.
State Senator Scott Wiener first proposed a version of the measure last year that would have mandated kill switches for potentially dangerous models and required assessments by independent evaluators.
Critics argued that such stringent regulation of a young industry would stifle innovation. After Governor Gavin Newsom vetoed that measure, Wiener worked with a scientific committee to revise it, and the new version was signed into law on September 29.
Al Jazeera spoke with Hamid El Ekbia, head of Syracuse University's Autonomous Systems Policy Institute, who stated that "some accountability was lost" in the recently enacted version of the bill.