Generative AI: From the Moratorium Request to the Urgency of Regulation

*This is an AI-powered machine translation of the original text in Portuguese.

Fear, insecurity, uncertainty, speculation. These terms are not new to those who have been following the debate about the development and use of artificial intelligence. The potential of AI has raised questions around the globe, some credible and others alarmist. Among the voices in the debate are magnates, billionaires, and even prominent intellectuals.

Almost a decade ago, the physicist Stephen Hawking [1] voiced concern that AI was highly likely to become a threat to humanity, arguing that the real danger should be seen not as an external agent but as an internal one: the harm could come from our own creations, AI among them. Success in creating artificial intelligence would be the greatest event in human history, but it could also be the last, unless we learn how to avoid its risks. Scientists were already urging all of us to ask what we can do to maximize the benefits of artificial intelligence while avoiding its dangers [2].

In another article, Hawking [3] reiterated that technology needs to be controlled to prevent it from destroying the human species. The true risk of AI would lie not in some "malevolent essence" but in its competence [4]: an AI system can be extremely effective at achieving its goals, and if those goals are not aligned with ours as a society, we could face serious problems. Bill Gates and Elon Musk, among others, echoed this view. The former (Rawlinson, 2015) said almost a decade ago that he did not understand why people were not yet concerned about the possibility of AI becoming a threat, while the latter has long maintained that we must be very cautious with artificial intelligence, which he considers the greatest threat to our existence. For Musk, there should be regulation on the matter, whether at the national or international level, if only to ensure that, in creating AI systems, we do not do something that jeopardizes our existence [5].

The tension resurfaced with the popularization of generative AI [6] and its widespread use following the launch of the image generator DALL-E and the text generator ChatGPT, both developed by OpenAI. Until then, many of AI's capabilities were not perceptible to the average user, and the debate on the development and use of AI received little attention. The ease of access these and other applications provide has made the topic a recurring one in the media.

Recently, in March of this year, an open letter titled "Pause Giant AI Experiments" [7] caused a stir in the technology world and beyond, mainly because it was signed by thousands of people (27,565 as of this writing), including "personalities" such as Elon Musk, Steve Wozniak, Yuval Harari, and Tristan Harris.

The letter argues, among other points, that AI systems with human-competitive intelligence can pose profound risks to society and humanity as a whole, precisely because AIs are becoming direct competitors of humans at general tasks. The letter therefore calls on all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4.

Although the letter raises valid, long-standing concerns, it has been seen by some as controversial, mainly on two points:

1. Undeclared intentions: The letter is an initiative of the Future of Life Institute, an organization that counts the billionaire Elon Musk among its external advisors. Besides having donated millions of dollars to the institute for research purposes, Musk has shown an interest of his own in AI development. For many, this fact alone makes the letter's request suspect. Indeed, Musk has created an AI company called X.AI to compete with OpenAI [8], with a rival system already named TruthGPT [9]. According to the billionaire, the goal is an AI that seeks the truth as much as possible and understands the nature of the universe.

2. Arbitrary deadline: The letter does not explain why it stipulates a six-month pause in the development of AI. The fear of developing and using AI without regulatory parameters is real (and somewhat old), but the proposed timeframe seems insufficient to settle the regulatory question. If six months is not enough to better regulate the development of AI, the question arises: could the discourse of caution (valid and necessary as it is) be used against competitors who currently hold the advantage in the market race?

One conclusion, however, is clear: the global movement toward regulating artificial intelligence is becoming increasingly necessary. Completely pausing the development of AI systems may be impossible, but there are ways to mitigate risks, and limiting the potential harms of AI systems requires not delaying regulation any longer than necessary. In Brazil, the debate on AI is intensifying; among its outcomes is Bill No. 2,338 of 2023, which addresses the use of artificial intelligence and follows the risk-based approach also adopted in Europe's proposed AI Act. The benefits of AI are many, but failing to regulate the technology properly can, on one hand, cause avoidable damage (if it is underregulated) and, on the other, hinder the development of relevant technological applications (if the rules are excessively rigid).

[1] In an article co-authored with other researchers: Max Tegmark, Stuart Russell, and Frank Wilczek.

[2] HAWKING, Stephen; RUSSELL, Stuart; TEGMARK, Max; WILCZEK, Frank. Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough? 2014. Available at: <http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-atthe-implications-of-artificial-intelligence-but-are-we-taking-9313474.html>. Accessed: May 18, 2023. RAWLINSON, Kevin. Microsoft's Bill Gates insists AI is a threat. 2015. Available at: <https://www.bbc.com/news/31047780>. Accessed: May 18, 2023.

[3] WHIPPLE, Tom. Stephen Hawking on humanity (and Jeremy Corbyn): The physicist is hopeful for the future - as long as the Labour leader goes. 2017. Available at: <http://www.thetimes.co.uk/edition/news/hawking-on-humanity-and-corbynjk88zx0w2>. Accessed: May 18, 2023.

[4] GRIFFIN, Andrew. Stephen Hawking: Artificial intelligence could wipe out humanity when it gets too clever as humans will be like ants: AI is likely to be "either the best or worst thing ever to happen to humanity," Hawking said, "so there's huge value in getting it right". 2015. Available at: <http://www.independent.co.uk/life-style/gadgets-and-tech/news/stephen-hawkingartificial-intelligence-could-wipe-out-humanity-when-it-gets-too-clever-as-humansa6686496.html>. Accessed: May 18, 2023.

[5] GIBBS, Samuel. Elon Musk: artificial intelligence is our biggest existential threat. 2014. Available at: <https://www.theguardian.com/technology/2014/oct/27/elon-musk-artificialintelligence-ai-biggest-existential-threat>. Accessed: May 18, 2023.

[6] Generative AI uses deep machine learning techniques, such as generative adversarial networks (GANs).

[7] FUTURE OF LIFE INSTITUTE. Pause Giant AI Experiments: An Open Letter. 2023. Available at: <https://futureoflife.org/open-letter/pause-giant-ai-experiments/>. Accessed: May 18, 2023.

[8] THE VERGE. Elon Musk founds new AI company called X.AI. 2023. Available at: <https://www.theverge.com/2023/4/14/23684005/elon-musk-new-ai-company-x>. Accessed: May 18, 2023.

[9] SAUER, Megan. Elon Musk now says he wants to create a ChatGPT competitor to avoid 'A.I. dystopia'—he's calling it 'TruthGPT'. 2023. Available at: <https://www.cnbc.com/2023/04/19/elon-musk-says-he-wants-to-create-chatgpt-competitor-called-truthgpt.html>. Accessed: May 18, 2023.

 

*Coauthored with Bruno Farage da Costa Felipe. Originally published in Conjur.

**Image: Freepik
