AI and the "fear factor"

*Originally published in JOTA.

**This is an AI-powered machine translation of the original text in Portuguese.

Among the different risks posed by artificial intelligence, perhaps the greatest is that of underutilization: depriving patients of early diagnoses of serious diseases, allowing food to be wasted through inefficient logistics, missing better opportunities for planting and harvesting, perpetuating the backlog of cases in the courts, reducing the availability or raising the cost of credit, and so on.

This risk tends to go unnoticed when authorities react impulsively to alarms raised against the technology. I will refer to these impulses to intervene as the risk of fear, which can limit the development of AI, and I will offer three examples.

Recently, the Brazilian National Data Protection Authority (ANPD) revoked a preventive measure it had issued ordering the suspension of Meta AI’s training on data from users of the social networks operated by that company. The measure had been justified by an alleged threat of “irreparable harm” to those users’ data protection.

But what harm to individual personality is at stake when data is used to develop a large-scale computational model that, by its very nature, derives entirely general and depersonalized writing patterns from mathematical transformations of the texts used in training?

Personal data protection was born to protect individual personality in the public sphere, which could be threatened by the extraction of personalized information from data processed by third parties. In the famous census case, the German Constitutional Court deemed it legitimate to collect and process the personal data of German citizens, including sensitive data, in order to design depersonalized public policies.

The problem lay in a single article of the census legislation, which provided for the use of the data by municipalities to allocate children to schools.[1] Since that foundational milestone, data protection has evolved into legislation that requires a justification for every stage of processing, beginning with collection, even when the data is anonymized at later stages, potentially losing sight of the purpose of the final application.

Accordingly, the ANPD demanded a legal basis for the stages of collecting, storing, and processing users’ texts from social networks, without focusing on the purpose of developing a language model that would, in the end, be depersonalized. There is nothing wrong with ensuring the adequacy of the earlier stages of processing, but not to the point of suspending the development of the application on the strength of a non-existent threat to the individual personality of social network users.

After all, using large volumes of local texts in AI training not only makes the tool more efficient for its users but also allows it to adapt to the local culture, mitigating the risk of “digital colonialism” (the immersion of digital technology or AI in the data and values of the dominant Western culture).[2] The issue goes beyond whether a “legitimate interest” can be characterized; it calls into question the very applicability of the Brazilian General Data Protection Law (LGPD) to AI deployments in which the data is depersonalized at some link in the development chain.

Can personal data protection limit the advancement of artificial intelligence? This question, posed by the European Parliament to academia, received the following answer: no, as long as the legislation is interpreted with a focus on the purpose of the reuse and on the inferences to be produced by the tool.[3] The concern became pressing, for example, because of the limits the legislation placed on developing and using AI tools to combat the Covid-19 pandemic based on the processing of personal health or geolocation data.

Rather than a constraint on AI, privacy should be seen as just one of the values, alongside social benefit, non-maleficence, non-discrimination, transparency, and trustworthiness, that make up the ethics of data processing for responsible AI development. The ANPD’s intervention stemmed from reading responsible AI through the lens of data protection rather than, as it should have, reading data protection in light of responsible AI.

Another manifestation of the risk of fear lies in the alarm over concentration in generative AI markets. In July 2024, following reports from the OECD and international antitrust authorities, the European Commission, the UK Competition and Markets Authority, and the U.S. Federal Trade Commission published a joint statement, warning of the risk of platformization of AI markets by big tech companies, similar to what occurred in digital markets.

This warning prompted CADE, in Brazil, to issue official requests for information to big tech companies about their recent partnerships with AI startups, regardless of whether those deals meet the thresholds for merger notification. These initiatives appear intended to prevent, in AI markets, something akin to the so-called killer acquisitions (acquisitions that led to concentration in digital markets and were not foreseen by antitrust authorities, such as Facebook’s purchase of WhatsApp).

The fear of concentration stems from what developing generative AI requires: enormous amounts of data, scarce qualified experts, and large-scale computational resources that are costly to build.

However, parallels with the platformization of digital markets may be overestimated,[4] as some key factors considered responsible for concentration in digital services are absent. For example, economies of scale in AI markets may not lead to minimal or near-zero marginal costs, as in digital markets, since each AI system user increases the need for computational capacity.

Likewise, there do not appear to be significant network effects among system users (unlike social networks, adding new ChatGPT users does not add value to the tool for those already using it). Additionally, the development of AI models under open regimes can alter competitive dynamics.

There’s nothing wrong with studying the potential competitive impacts of startup acquisitions by big tech companies, but treating them as killer acquisitions within a dangerous platformization movement could precipitate interventions.

The third example lies in the revision of the European AI Act in the wake of the explosion of generative AI. In earlier versions, the classification of an AI system as high-risk considered the specific impacts of its applications on individual and collective rights in particular sectors of activity, such as medicine.

But how to assess the risk of general-purpose AIs? The AI Act’s solution was to introduce the concept of systemic risk measured by the volume of data and computational processing involved in AI training. Again, the focus is on the processing procedure rather than the purpose of the application.

Generative AI systems are thus treated as high-risk out of fear of their size, even without a clear view of their specific impact. For example, one of the biggest drivers of burnout among doctors is the excessive time spent filling out forms and records.

The use of generative AI could significantly reduce this burden.[5] Automated form-filling, reviewed by healthcare professionals, does not appear to pose a significant risk. Yet, because it involves using generative AI in the medical field, it would be classified as high-risk, requiring strict governance measures before it could be made available.

AI regulation and authority intervention must always consider the application’s purpose, its benefits, and actual risks, lest we materialize the greatest risk: the paralyzing fear of technology.


[1] MARANHÃO, Juliano; CAMPOS, R.; ABRUSIO, J. Personal data protection at the Supreme Federal Court and the role of IBGE. Conjur, May 29, 2020.

[2] MOLLEMA, Warmhold Jan Thomas. Decolonial AI as Disenclosure. Open Journal of Social Sciences, vol. 12, no. 2, pp. 574-603, 2024.

[3] SARTOR, Giovanni; LAGIOIA, Francesca. The impact of the General Data Protection Regulation (GDPR) on artificial intelligence. Brussels: European Parliament, 2020. DOI: 10.2861/293.

[4] MARANHÃO, Juliano; BARROS, J. M.; ALMADA, M. Artificial intelligence and competition: navigating open waters. Conjur, Oct. 19, 2023.

[5] Transforming healthcare with artificial intelligence: Redefining Medical Documentation. Mayo Clinic Digital Health Proceedings, vol. 2, issue 3, 2024. www.mcpdigitalhealth.org/article/S2949-7612(24)00041-5/fulltext.
