Governance beyond regulation: How can corporate models contribute to responsible artificial intelligence?

*This is an AI-powered machine translation of the original text in Portuguese.

**The image used in this article was created by a generative artificial intelligence.

***Coauthored with Claudia Pitta. Originally published in Jornal Jurid.

The explosion of Generative Artificial Intelligence systems this year has not only popularized a technology that was already growing rapidly, but has also heightened awareness of, and concern about, its risks and potential social impacts. It has transformed the competitive landscape of this market. According to the data intelligence site Statista, the AI software market is expected to reach $126 billion by 2025, with an annual growth rate of 35%. While traditional machine learning models initially pointed toward a scenario of broad, fragmented competition, today, with foundational models such as large language models (LLMs), there seems to be a trend toward platformization, similar to what happened with the internet. This is because such large-scale models require significant computational processing power, a massive volume of high-quality and diverse data, and highly skilled developers, all of which are high-cost assets. AI applications are expected to be built on or enhanced by foundational models and fine-tuned to specific data domains, creating ecosystems of developers (complementors) orbiting around the major providers of foundational models (orchestrators).

As artificial intelligence is set to be incorporated into products across diverse markets, and potentially used in data analysis to define competitive and marketing strategies, the concentration of economic power in AI could have future implications across many markets, perhaps all of them.

In this scenario, beyond the challenge of ensuring reliable, responsible, and fair systems, AI poses another challenge to the corporate governance of the companies that develop and operate these technologies: potential market concentration. The greater the power held by dominant companies, the broader the applications derived from foundational models, and the greater the risks, the higher the demands for integrity, transparency, responsibility, fairness, and sustainability placed upon them.

Those who think that the responsible development and use of AI depend solely on restrictive state regulation are mistaken. Given the natural fear and distrust of users and of those affected by a technology that mimics human capabilities, the reliability and responsibility of AI systems should be key elements of a product's attractiveness, perhaps even more important than its performance. At the same time, corporations themselves have an ethical duty to develop self-regulation and governance mechanisms that steer the development of AI toward the common good, since we are dealing with a technology capable of profoundly transforming economic, social, and cultural relationships. As the 2020 World Economic Forum Manifesto states, a company is "more than an economic unit generating wealth"; "it fulfils human and societal aspirations as part of the broader social system."

It is not surprising, therefore, that two of OpenAI's main competitors, Anthropic and Inflection AI, are organized as Public Benefit Corporations, a special corporate model provided for in the legislation of some U.S. states and of other countries. Under Delaware law, a Public Benefit Corporation is "a for-profit corporation that is intended to produce public benefit(s) and to operate in a responsible and sustainable manner," meaning it assumes a statutory obligation (and therefore one enforceable by any stakeholder) to generate public benefit(s) alongside returns for shareholders.

Given the technology's potential benefits for many fields of human development, its risks to fundamental rights, and a potential market concentration, at least in foundational models, it makes sense for such orchestrating companies to take the lead, at least within the ecosystems of applications derived from their models, in the responsible and sustainable development and operation of artificial intelligence, for the benefit of the public and of humanity. Although experience shows that this commitment alone is not sufficient, it is certainly a good start. Benefit Corporation legislation requires a higher degree of transparency and accountability, allowing scrutiny by society at large.

Recognizing this leadership responsibility, some of these companies publicly committed to the U.S. government in July of this year to developing secure and reliable systems, adopting a series of measures to that end. Also noteworthy is the open-innovation initiative of some major providers, which have opened their foundational models to independent developers; this can increase the reliability of these systems and even bring more competition.

Other initiatives are already being tested to align major AI companies with societal concerns, such as special capital structures, limits on dividend distribution, statutory commitments, accountability mechanisms, monitoring bodies, and various other governance and self-regulation arrangements.

AI poses new ethical and governance challenges. It falls to civil society, academia, and the private sector to debate the issue, create alternatives, and press companies to make and keep ethical commitments to the responsible development of AI. At the same time, governments should be urged to adopt balanced regulation that reflects the interests of different stakeholders without hindering the development of the technology. We are all participants in this future under construction.
