U.S. tech bigwigs chatted with lawmakers and civil society groups in Washington this week about the possibilities for regulating artificial intelligence (AI). Elon Musk, for once speaking for many, made a wistful plea for legislative clarity.
Meanwhile, the EU and China are forging ahead with legislation that will provide it in their respective jurisdictions:
• The EU’s Artificial Intelligence Act, the world’s first comprehensive legal framework for AI regulation, won approval from the European Parliament in June, clearing the way for EU officials and the 27 EU member states to negotiate the detailed text.
• In August, the Chinese Academy of Social Sciences (CASS) released a model AI law.
A research institute publishing draft legislation in a highly technical area is not unprecedented in China. The Personal Information Protection Law (PIPL) of 2021 was similarly preceded by scholarly efforts that contained most of the fundamental principles in the law that now determines the country’s personal data protection framework.
Many of the CASS document’s core elements will likewise be carried into the final AI law, which should take about a year to finalize once a draft has been submitted to the National People’s Congress, expected before year-end.
The document provides a framework for developing an AI-enabled domestic economy.
It thus gives considerable attention to outlining the state’s role in firmly guiding the development and application of AI, including on matters such as constructing computing infrastructure, building up data markets, training experts and transforming AI research and development (R&D) into industrial applications.
It also lays down a few principles that will sound familiar to Western policymakers, most notably the principle of human-centric AI—albeit within the scope of “socialist core values.” These principles include openness, transparency, fairness, equality and non-discrimination.
The draft also proposes guardrails for AI development through a “negative list” mechanism that would impose licensing obligations for the R&D and public provision of specific AI applications. Free development of listed applications would be barred or subject to strict regulatory permission.
To obtain a license, applicants would have to be Chinese companies whose chief “responsible personnel” are Chinese citizens. These requirements would give the authorities greater regulatory leverage and, where necessary, the power to make arrests without risking diplomatic consequences.
Moreover, developers would be obliged to:
• Maintain copious documentation, for example, on the training data used by AI models, developments in machine learning and usage records.
• Establish dedicated systems for quality and content control, data security and incident management.
The draft puts responsibility for ensuring the security and safety of AI applications on providers, who will bear the responsibility to:
• Notify users about security risks and incidents.
• Specify that an online service uses AI applications.
• Explain the function of AI algorithms upon request and mitigate bias and discrimination present in the model.
• Regularly conduct ethics reviews.
All AI providers must also register their AI applications with regulatory authorities and conduct regular audits not dissimilar to those imposed under the PIPL.
This transfer of responsibility from the government to developers not only acknowledges state authorities’ limited enforcement capacity in this highly technical field but also incentivizes developers to comply with regulations and maintain the mandated documentation.
To oversee this, CASS proposes the establishment of a national AI regulator separate from:
• The Cyberspace Administration of China, currently the primary regulator for AI-generated content and AI-based online services.
• The Ministry of Industry and Information Technology, which is closely involved with AI industrial policy.
This body would echo the National Data Bureau, established earlier this year under the aegis of the top strategic economic planning agency, the National Development and Reform Commission. It would draft specific regulations for AI, coordinate risk monitoring and warning, oversee emergency response, and take charge of enforcement.
The draft also provides for regulatory sandboxes and experiments, recognizing a need to promote innovation and prevent regulation from crushing infant applications.
Of note for foreign investors and companies, the draft contains tripwire clauses similar to those in the Data Security Law and the PIPL, enabling reciprocal measures in case of foreign sanctions.
This provision is unlikely to affect Western vendors of large language models such as ChatGPT, since those services are blocked in China. Instead, it would bear on the broader AI technology ecosystem, including semiconductors used in AI systems and cross-border data exchanges between businesses.
Punitive provisions, too, echo the PIPL: the draft introduces penalties of up to 4% of a non-compliant developer’s annual turnover.
The proposal appears mature: many elements echo talking points in official discourse, borrow mechanisms and practices from legislation already in force covering synthetic and AI-generated content, data security and technical standards, and consolidate provisions already present in ministry-level regulations.
It will further push China’s AI development along an increasingly clear path:
• A focus on industrial applications, within strict constraints wherever political sensitivities are involved.
• A focus on the socio-economic externalities and strategic policy goals of Beijing that loomed large in the regulatory offensive against technology firms in 2021-22.
As is usual with Chinese legislation, once it is law, subsequent enabling regulations will clarify which applications are on the negative list, resolve overlaps with existing data legislation, and outline the proposed AI regulator’s precise powers and remit.