California Proposes New Bills to Regulate Generative AI Technology

June 20, 2024

As the development and deployment of artificial intelligence (AI), particularly generative AI systems, accelerate, the California Legislature has introduced a series of bills aimed at ensuring the transparent, ethical, and responsible use of these technologies. These legislative efforts seek to address ethical concerns, data privacy issues, and the broader implications of AI, positioning California as a proactive leader in AI governance. The overarching objective is to mitigate adverse impacts while fostering innovation, setting a precedent for AI regulation across the nation.

Rapid advances in AI have prompted California lawmakers to act decisively, balancing innovation with ethical considerations. The legislative measures emphasize transparency in AI-generated content, ensuring that the origins and nature of AI outputs can be readily identified. The bills also address data privacy, placing strict limits on the sources and uses of data for training AI models. This proactive stance extends to anticipating future advances in AI capabilities, laying the groundwork for managing more sophisticated systems. By addressing these multifaceted concerns, California demonstrates a holistic approach to AI governance that other jurisdictions may look to when developing their own regulations.

Ensuring Transparency in AI-Generated Content

A cornerstone of the proposed legislation is transparency in AI-generated content, which aims to clear up ambiguity about the origins and authenticity of AI outputs. Senate Bill 942 (SB 942) requires providers of generative AI systems averaging more than 1 million monthly users to offer an “AI detection tool” that lets users verify whether content is AI-generated. The legislation also requires AI-generated content to carry visible, hard-to-remove disclosures indicating its synthetic nature, with penalties for noncompliance of $5,000 per day, enforceable by the Attorney General. These requirements push companies to ensure that users are not misled about the nature of the content they interact with, fostering trust in AI technologies.

Similarly, Assembly Bill 3211 (AB 3211) builds on these transparency measures by requiring generative AI systems to embed watermarks in their outputs by February 1, 2025. The watermarks must be paired with decoders that can verify authenticity, and the requirements extend beyond generative AI content to conversational AI systems such as chatbots, which must clearly disclose their synthetic nature, further reinforcing transparency in user interactions. The bill also mandates that system vulnerabilities be reported to the Department of Technology, which holds enforcement power and can impose penalties of up to $1 million or 5% of the violator’s annual global revenue. This element of the bill underscores the importance of maintaining robust security and ethical standards in deploying AI technologies.

Data Source and Usage Limitations

Another critical area of focus for the California Legislature is regulating the data used to train AI models, with an emphasis on ethical data practices and user privacy. Assembly Bill 2013 (AB 2013), effective January 1, 2026, requires AI developers to post on their websites comprehensive information about the datasets used to train their models. This includes details such as the datasets’ source and ownership, the number of samples they contain, and whether they include copyrighted material or personal information as defined by the California Consumer Privacy Act (CCPA). By enforcing transparency in data usage, AB 2013 aims to ensure that AI model training adheres to ethical standards and respects user privacy; models developed solely for security and integrity purposes are exempt.

Assembly Bill 2877 (AB 2877) specifically aims to protect minors by prohibiting the use of their personal information for AI training without affirmative consent: consent is required for individuals under the age of 16, and for children under 13 it must come from a parent or guardian. Even when consent is provided, the bill mandates that the data be de-identified and aggregated before it is used for AI training. This additional layer of protection ensures that minors’ information is not misused and that their data is held to a higher privacy standard. The objective of these legislative efforts is to foster an AI ecosystem in which data practices are transparent, ethical, and geared toward protecting individual privacy, especially for vulnerable populations such as minors.

Advanced AI Regulations

The California Legislature also addresses the regulation of future advanced AI systems through Senate Bill 1047 (SB 1047). This bill would establish a Frontier Model Division to govern AI models trained with extremely large amounts of computing power, specifically more than 10^26 integer or floating-point operations. Although current technologies are not yet affected, the bill demonstrates a forward-thinking approach to managing the potentially powerful AI systems that may emerge. By preemptively setting a regulatory framework for such advanced capabilities, SB 1047 signals a precautionary approach, ensuring that appropriate measures are in place to handle the complexities and risks of exceedingly powerful AI technologies.

Furthermore, SB 1047 requires operators of computing clusters capable of 10^20 integer or floating-point operations per second to establish strict policies governing customer usage. These policies are intended to preemptively address issues that may arise from the deployment of advanced AI systems, ensuring that such powerful technologies are developed and deployed responsibly. This legislative foresight highlights California’s commitment to addressing not only current AI technologies but also future advancements, providing a structured pathway for the responsible evolution of AI capabilities.

Regulations for AI Deployment

In the realm of AI deployment, Assembly Bill 2930 (AB 2930) is particularly comprehensive, establishing compliance requirements for entities that use AI to make consequential decisions in areas such as hiring, educational assessment, financial services, and healthcare. Covered entities must perform impact assessments that analyze their data collection methods, evaluate potential adverse impacts on protected classes, and propose measures to mitigate discrimination risks. The goal is to ensure that AI deployment does not perpetuate bias or result in unfair treatment of individuals based on protected characteristics.

AB 2930 further mandates that individuals be informed before AI tools are used to make consequential decisions about them, promoting transparency and respecting individual autonomy. The bill also requires entities to provide alternatives or accommodations for individuals who prefer not to be subject to AI-driven decision-making. By reinforcing human oversight and offering alternatives, the legislation ensures that AI technologies serve as assistive tools rather than replacements for human judgment. These measures highlight the critical need for transparency and accountability in the deployment of AI systems, safeguarding against potential harms and promoting equitable outcomes.

AI in Healthcare

The intersection of AI and healthcare receives specific attention through Assembly Bill 3030 (AB 3030). This bill mandates that medical offices using AI for patient communications disclose that use to patients, fostering transparency in healthcare interactions. It also requires medical offices to provide instructions on how patients can reach a human healthcare provider. This provision preserves the human element in medical interactions, maintaining patient trust and keeping communication channels between patients and providers clear.

AB 3030 underscores the importance of transparency and human oversight in healthcare contexts where AI is employed. By mandating disclosures and offering guidelines for human interaction, the bill aims to mitigate any potential alienation or confusion patients might feel when interacting with AI technologies. This focus on maintaining the human element within healthcare interactions reflects a balanced approach, ensuring that AI serves as an augmentative tool without undermining the critical human aspects of patient care and communication.

