Generative AI and Advances in the Chinese Regulatory Approach

*This is an AI-powered machine translation of the original text in Portuguese.

The Communist Party of China exercises strict control over the internet in the country, creating a digital environment that serves its interests and fosters a robust domestic industry. As generative artificial intelligence (GAI) technologies with disruptive potential advance, new regulatory responses are emerging to preserve state control over digital environments.

The Cyberspace Administration of China (CAC), the country's internet regulatory authority, published the Measures for the Management of Generative Artificial Intelligence Services[1] (Measures for GAI) on April 11, 2023. The publication came just a few months after the Provisions on the Administration of Deep Synthesis Internet Information Services[2] (Provisions on Deep Synthesis) took effect, partly in response to the rapid advancement and adoption of GAI tools such as OpenAI's ChatGPT and the subsequent initiatives by Chinese companies, including Ernie (Baidu), Tongyi Qianwen (Alibaba), and SenseChat (SenseTime), to keep up with their Western competitors.

The CAC plays a central role in the ever-evolving legal and regulatory framework, given the dynamic nature of GAI technological developments. The agency has had authority over digital content since 2014 and is responsible for cybersecurity, internet content regulation, and overseeing companies in the sector. Its responsibilities include directing, coordinating, and supervising online content management, regulatory activities, licensing, and administrative sanctions[3]. Monitoring the actions of the CAC is essential for understanding the direction of GAI regulation.

In China, the regulation of generative artificial intelligence follows a vertical approach[4]: different regulatory texts focus on specific applications of GAI, whereas in Europe the AI Act is horizontal, covering a broad spectrum of applications and various aspects of the technology.

The Measures for GAI were preceded by other rules focused on specific applications, such as the Internet Information Service Algorithmic Recommendation Management Provisions[5] (Provisions on Algorithmic Recommendation), in effect since March 2022, which were created to standardize algorithmic recommendation activities. For example, they prohibit the algorithmic generation of fake news and prevent algorithmic service providers from engaging in monopolistic or anti-competitive practices.

Also preceding the Measures for GAI, the Provisions on Deep Synthesis (in effect since January 2023) were designed to regulate deep synthesis activities, the technology behind deepfakes (realistic but fabricated audio, images, and videos). They laid the foundation for GAI regulation in the country, with specific provisions on chatbots and a focus on the production of synthetic audiovisual media.

Turning to the new text, the stated central objective of the Measures for GAI is to promote the healthy development and standardized application of generative AI in China. The proposal represents another recent step toward mitigating the risks posed by GAI technologies[6] and includes provisions of interest that could serve as a benchmark for future regulatory initiatives.

One of the provisions likely to generate interest and debate beyond China's borders is the set of obligations imposed on providers as a prerequisite for offering services to the public. Under the CAC's rule, service providers must submit a security evaluation request to the CAC and provide the authority with certain information about the algorithm before making their GAI tools available to the Chinese public. The Measures for GAI do not detail the specific requirements and procedures for this security evaluation, but it could involve an assessment of the potential risks associated with the use of the AI product or service, including possible vulnerabilities, privacy impact, and overall compliance with cybersecurity laws and regulations, in addition to potential implications for state social control.

Another notable topic is the intersection with personal data protection in China under the recent Personal Information Protection Law (PIPL). The Measures for GAI go beyond what the PIPL stipulates, providing that when personal data is processed by AI, the processing agents are responsible as personal information handlers, a concept roughly equivalent to the "controller" familiar to the Brazilian public. Under the new Chinese rule, consent (Art. 7(3)) is required for data processing, or other procedures must be followed in accordance with existing laws or administrative regulations (Art. 5). Article 11 prohibits service providers from profiling users based on the information they share with GAI systems.

The stricter Chinese rules for generative artificial intelligence developers have the potential to put Chinese actors out of step with their international competitors. Companies like OpenAI develop their programs based on large amounts of data, often obtained through web scraping techniques. The provisions of the Measures for GAI regarding the databases that underpin the creation and operation of GAI systems include requirements such as the removal of all content that violates intellectual property rights. Additionally, service providers must ensure the truthfulness, objectivity, accuracy, and diversity of the data, which may require the creation of additional tools and filters and, consequently, additional costs.

While not an exhaustive analysis of the regulation, the provisions of the Measures for GAI signal the Chinese government's effort to reconcile technological innovation with state and social interests, even at the risk of limiting the international competitiveness of the local generative artificial intelligence industry. In the international regulatory landscape of an expanding technology, the existence of legislative initiatives based on different premises seems beneficial and allows different paths to be explored for regulating technologies whose disruptive potential remains immeasurable. Given China's vertical regulatory approach, it is reasonable to expect its legislation to continue evolving rapidly as the technology and its applications develop.

[1] HUANG, Seaton et al. Translation: Measures for the Management of Generative Artificial Intelligence Services (Draft for Comment) – April 2023: Novel rules about training data and accuracy of generated media circulated for comment. DIGICHINA, Stanford University. Apr. 2023. Available at: https://digichina.stanford.edu/work/translation-measures-for-the-management-of-generative-artificial-intelligence-services-draft-for-comment-april-2023/. Accessed on 05/21/2023.

[2] CYBERSPACE ADMINISTRATION OF CHINA. Provisions on the Administration of Deep Synthesis Internet Information Services. Dec. 2022. Available at: http://www.cac.gov.cn/2022-12/11/c_1672221949354811.htm. Accessed on 05/20/2023. See translation at: https://www.chinalawtranslate.com/en/deep-synthesis/. Accessed on 05/20/2023.

[3] HORSLEY, Jamie P. Behind the Facade of China’s Cyber Super-Regulator: What we think we know—and what we don’t—about the Cyberspace Administration of China. DIGICHINA, Stanford University. Aug. 2022. Available at: https://digichina.stanford.edu/work/behind-the-facade-of-chinas-cyber-super-regulator/. Accessed on 05/19/2023.

[4] O’SHAUGHNESSY, Matt; SHEEHAN, Matt. Lessons from the World’s Two Experiments in AI Governance. Carnegie Endowment for International Peace. Feb. 2023. Available at: https://carnegieendowment.org/2023/02/14/lessons-from-world-s-two-experiments-in-ai-governance-pub-89035. Accessed on 05/21/2023.

[5] CREEMERS, Rogier et al. Translation: Internet Information Service Algorithmic Recommendation Management Provisions – Effective March 1, 2022. DIGICHINA, Stanford University. Jan. 2022. Available at: https://digichina.stanford.edu/work/translation-internet-information-service-algorithmic-recommendation-management-provisions-effective-march-1-2022/. Accessed on 05/18/2023.

[6] TRIOLO, Paul. ChatGPT and China: How to think about Large Language Models and the generative AI race. The China Project, Apr. 2023. Available at: https://thechinaproject.com/2023/04/12/chatgpt-and-china-how-to-think-about-large-language-models-and-the-generative-ai-race/. Accessed on 05/08/2023.

[7] ARRIENS, Jaap. In China, the ‘Great Firewall’ Is Changing a Generation. Politico, Sep. 2020. Available at: https://www.politico.com/news/magazine/2020/09/01/china-great-firewall-generation-405385. Accessed on 05/15/2023.

[8] Cf. https://institute.aljazeera.net/en/ajr/article/2070.

[9] The responsibility for the output of GAI platforms and other consequences has been the subject of recent debates in the United States. LIPTAK, Adam. Supreme Court Won’t Hold Tech Companies Liable for User Posts. The New York Times, May 2023. Available at: https://www.nytimes.com/2023/05/18/us/politics/supreme-court-google-twitter-230.html. Accessed on 05/21/2023.

 

*Coauthored with José Humberto Fazano Filho and Maria Gabriela Grings. Originally published in JOTA.
