International Advances in Artificial Intelligence Governance: Analysis of the Scope of the G7 Guiding Principles and Code of Conduct and President Biden's Executive Order

*This is an AI-powered machine translation of the original text in Portuguese.

On October 30, 2023, three important documents on artificial intelligence (AI) governance were published: within the G7 framework, the Guiding Principles and the Code of Conduct for organizations developing advanced AI systems, and in the United States, President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Since then, commentators have spoken of the effective beginning of AI regulation, after years of legislative debate in various countries, especially in the European Union, with the AI Act proposed in 2021, and in Brazil, where the first bill on the topic (PL 21/2020) was introduced in 2020 and has since been replaced by PL 2338/2023, currently under consideration in the Senate.

The G7 Guiding Principles and Code of Conduct are part of the Hiroshima Process, in which G7 leaders discussed initiatives to promote the safety and trustworthiness of AI systems worldwide. Both documents provide voluntary guidance to organizations (from companies to civil society and government entities) that "develop and use the most advanced AI systems, including advanced foundation models and generative AI systems." Beyond their soft-law nature, the Principles and the Code of Conduct expressly state that they do not replace state regulation of AI: different jurisdictions may adopt their own approaches to implementing the guidelines, and governments may develop stronger regulatory and governance measures in parallel.[1]

In this regard, the European Commission welcomed the approval of the Guiding Principles and the Code of Conduct, stating that these voluntary guidelines would "complement, at the international level, the legally binding rules that EU lawmakers are currently finalizing under the AI Act."[2] Harmonization with the AI Act appears to have been considered in the drafting of the G7 documents, which instruct organizations to apply them in line with a risk-based approach, similar to the approach taken by the AI Act and by PL 2338/2023 in Brazil, both of which establish different obligations according to the risk that an AI system may present.

The Guiding Principles are replicated and detailed in the Code of Conduct, and both contain introductory notes establishing, among other points, that private entities should refrain from developing and using AI systems of "unacceptable" risk (applications that undermine democratic values, are particularly harmful to individuals or communities, facilitate terrorism or criminal misuse, or pose substantial risks to safety, security, and human rights) and should observe international human rights instruments, such as the UN Guiding Principles on Business and Human Rights. G7 leaders also commit to developing monitoring and accountability mechanisms for companies. The documents then set out the following 11 recommendations:

  1. Take appropriate measures throughout the development of advanced AI systems, including before and during their deployment and market placement, to identify, assess, and mitigate risks throughout the AI lifecycle.
  2. Identify and mitigate vulnerabilities and, when appropriate, incidents and patterns of misuse after deployment, including market placement.
  3. Publicly report on the capabilities, limitations, and areas of appropriate and inappropriate use of advanced AI systems to support sufficient transparency, thereby contributing to increased accountability.
  4. Seek responsible sharing of information and incident reports among organizations developing advanced AI systems, including with industry, governments, civil society, and academia.
  5. Develop, implement, and disclose AI governance and risk management policies grounded in a risk-based approach, including privacy policies and mitigation measures.
  6. Invest in and implement robust security controls, including physical security, cybersecurity, and insider-threat safeguards, throughout the AI lifecycle.
  7. Develop and implement reliable authentication and provenance mechanisms for content, whenever technically feasible, such as watermarks or other techniques to allow users to identify content generated by AI.
  8. Prioritize research to mitigate societal, safety, and security risks, and prioritize investment in effective mitigation measures.
  9. Prioritize the development of advanced AI systems to address the world's greatest challenges, notably, but not limited to, the climate crisis, global health, and education.
  10. Promote the development and, when appropriate, adoption of international technical standards.
  11. Implement appropriate data input measures, such as data quality controls, and protections for personal data and intellectual property.

Before the Hiroshima Process texts were approved, the European Commission opened a public consultation on the Guiding Principles. One of the consultation questions concerned the monitoring mechanisms to be implemented: whether monitoring is needed and, if so, whether it should be carried out by a trusted international organization, by national bodies, or through self-assessment. Maranhão & Menezes contributed to the consultation, arguing that the dynamic nature of AI solutions favors a model of regulated self-regulation, in which organizations carry out assessments that keep pace with rapid technological change and with the specific applications of each sector, while national bodies set general guidelines. Under a self-assessment model, national bodies could still establish minimum assessment procedures and criteria, endorse the results to ensure a common minimum reference, and conduct audits in high-risk situations.

The public consultation also allowed suggestions for new principles or changes to those proposed by the G7. Among the considerations made by Maranhão & Menezes on this point, they highlighted the importance of a new principle ensuring that AI systems improve competitiveness in markets. As discussed in a recent article by Juliano Maranhão and Josie Menezes on the interface between AI and competition law,[3] the transformations brought about by AI models can significantly alter competitive dynamics and the relationships among market agents. On the one hand, technological advances related to AI drive innovation and the emergence of disruptive solutions. On the other, there are considerable competitive risks arising from the transformation of supply and competition structures in AI-related markets. For example, first movers may consolidate positions that entrench their market power at the expense of competitors and of potential new technological solutions. There are also risks of tacit collusion arising from the development of predictive AI solutions, with or without access to competitively sensitive information. Hence the need for a principle that favors competition in the development and application of AI systems.

Although this point is not included in the G7 texts, it is addressed in the Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI issued by President Biden.[4] Section 5 of the Executive Order deals with promoting innovation and competition, and item 5.3(a) establishes that agencies developing policies and regulations related to AI should promote competition in the market for AI and related technologies, as well as in other markets. Such actions include addressing risks arising from concentrated control of key inputs, taking steps to stop unlawful collusion, preventing dominant firms from disadvantaging competitors, and working to provide new opportunities for small businesses and entrepreneurs. In particular, the Federal Trade Commission is encouraged to consider whether to exercise its existing authorities to ensure fair competition in the AI marketplace and to protect consumers and workers from harms enabled by the use of AI.[5]

In addition to the section on promoting innovation and competition, the Executive Order addresses seven other themes: security standards for AI (Section 4); worker protection (Section 6); promotion of equity and civil rights (Section 7); protection of consumers, patients, passengers, and students (Section 8); privacy protection (Section 9); use of AI by the federal government (Section 10); and strengthening U.S. leadership abroad (Section 11). The content of the Executive Order complements that of the G7 Principles,[6] and the two share common points, such as encouraging research in vital areas like health and climate change;[7] developing standards and technical norms in partnership with international allies and standardization organizations to ensure that the technology is safe, reliable, and interoperable;[8] and adopting standards and techniques for content authentication, provenance tracking, labeling, and detection of synthetic content.[9]

Most of the Executive Order's provisions are addressed to government agencies, requesting the formulation of guidance documents or determining the creation of new public policies, such as incentive programs for research and innovation in AI. Among the documents to be developed by the agencies are guides, standards, and best practices on AI safety and security from the National Institute of Standards and Technology (NIST); principles and best practices, to be developed by the Secretary of Labor, to prevent employers from underpaying workers, unfairly evaluating job applications, or interfering with workers' ability to organize;[10] and best practices to be recommended in a report by the Attorney General on the use of AI in the criminal justice system.[11]

One important point in the Executive Order has been characterized in the media as imposing an obligation on AI companies to "open their data" to the U.S. government. In reality, the rule has a limited scope: it applies only to companies developing foundation models that pose a serious risk to national security, national economic security, or national public health and safety. In these cases, the companies developing such systems must notify the federal government when training the model and must share the results of all red-team safety tests and other critical information with the government.[12] This requirement is based on the Defense Production Act, a law enacted in 1950 to organize and facilitate the production of goods and services necessary for national security, which remains in effect with amendments.

The Executive Order is therefore not yet the general regulation of AI it has sometimes been portrayed as, not least because such a general norm could only be enacted with the approval of the U.S. Congress. Rather, it directs government action to address the challenges posed by AI and to strengthen the U.S. position in the international AI race. In specific cases it also requires information from private actors, but on the basis of prerogatives already guaranteed by law in matters of national defense.

The same consideration applies to the documents published by the G7, which expressly do not replace any state regulation of AI. This is not to deny the importance of these and other international AI governance initiatives, such as the AI Safety Summit held in the UK earlier this month (November 2023), which brought together governments, experts, and companies from various countries and resulted in a joint statement on AI safety testing and in the Bletchley Declaration on AI safety. Such initiatives play an important role in establishing principles with broad international acceptance and, in the case of the U.S. Executive Order, in developing guidelines, standards, and public policies. They nonetheless play a different role from the binding regulation of the AI industry that states may eventually adopt.

[1] "Different jurisdictions may take their own unique approaches to implementing these actions in different ways. We call on organizations, in consultation with other relevant stakeholders, to follow these actions, in line with a risk-based approach, while governments develop more enduring and/or detailed governance and regulatory approaches." Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems, p. 1.

"Different jurisdictions may take their own unique approaches to implementing these guiding principles in different ways. We call on organizations, in consultation with other relevant stakeholders, to follow these actions, in line with a risk-based approach, while governments develop more enduring and/or detailed governance and regulatory approaches." Hiroshima Process International Guiding Principles for Organizations Developing Advanced AI systems, p. 1.

Available at: https://www.mofa.go.jp/ecm/ec/page5e_000076.html. Accessed on 09/11/2023.

[2] European Commission. Commission welcomes G7 leaders' agreement on Guiding Principles and a Code of Conduct on Artificial Intelligence. 30/10/2023. Available at: https://digital-strategy.ec.europa.eu/en/news/commission-welcomes-g7-leaders-agreement-guiding-principles-and-code-conduct-artificial. Accessed on 09/11/2023.

[3] ALMADA, Marcos; MARANHÃO, Juliano; MENEZES, Josie. Artificial intelligence and competition: navigating open seas. Consultor Jurídico, October 19, 2023. Available at: https://www.conjur.com.br/2023-out-19/opiniao-ia-concorrencia-navegando-mar-aberto. Accessed on 09/11/2023.

[4] Available at: https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/. Accessed on 06/11/2023.

[5] "5.3. Promoting Competition. (a) The head of each agency developing policies and regulations related to AI shall use their authorities, as appropriate and consistent with applicable law, to promote competition in AI and related technologies, as well as in other markets. Such actions include addressing risks arising from concentrated control of key inputs, taking steps to stop unlawful collusion and prevent dominant firms from disadvantaging competitors, and working to provide new opportunities for small businesses and entrepreneurs. In particular, the Federal Trade Commission is encouraged to consider, as it deems appropriate, whether to exercise the Commission’s existing authorities, including its rulemaking authority under the Federal Trade Commission Act, 15 U.S.C. 41 et seq., to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI. […]” White House. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. October 30, 2023.

[6] Cf. White House, Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. October 30, 2023. "The actions taken today support and complement Japan’s leadership of the G-7 Hiroshima Process, the UK Summit on AI Safety, India’s leadership as Chair of the Global Partnership on AI, and ongoing discussions at the United Nations." Available at: https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/. Accessed on 06/11/2023.

[7] White House. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Sec. 5.2 (Promoting Innovation); Hiroshima Process Principle 9.

[8] White House. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Sec. 11(b) (Strengthening American Leadership Abroad); Hiroshima Process Principle 10.

[9] White House. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Sec. 4.5 (Reducing the Risks Posed by Synthetic Content); Hiroshima Process Principle 7.

[10] White House. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Sec. 6(b)(i).

[11] White House. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Sec. 7.1(b)(ii)(B).

[12] “In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public.” White House, Fact Sheet: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. October 30, 2023. Available at: https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/. Accessed on 06/11/2023.
