Honesty in the Use of AI in Electoral Propaganda

*This is an AI-powered machine translation of the original text in Portuguese.

**The image used in this article was created by a generative artificial intelligence.

***Originally published in JOTA.

The recent clash between Deputy Tabata Amaral (PSB-SP) and Mayor Ricardo Nunes (MDB) in the race for São Paulo City Hall has drawn attention to the application of Resolution 23.732/24 of the Superior Electoral Court (TSE), which establishes rules on the use of artificial intelligence (AI) in electoral propaganda. The deputy backed down after Nunes' campaign objected, alleging illegal use of AI in a video in which the mayor appears as the character Ken in a scene from the Barbie movie. But would such use of AI actually violate the TSE's Resolution?

The Resolution distinguishes three types of use, each with distinct consequences.

Firstly, there are what we might call "irrelevant" uses, outlined in Article 9º-B, paragraph 2: adjustments to improve image and sound quality, inclusion of graphic elements such as vignettes and logos, and routine montages composing photos of candidates and supporters. These elements are already common in campaigns and audiovisual production, but AI could deliver better results at lower cost. No specific obligations are imposed on these uses.

The second category, which we can call "authorized uses," involves producing synthetic content with AI to create, replace, omit, merge, or alter the speed or overlay of images or sounds. These are techniques capable not only of adjusting content or marking it with a visual identity but also of synthetically producing and manipulating texts, audio, and video in new ways. For these uses, outlined in Article 9º-B, paragraph 1, transparency obligations apply: voters must be clearly informed of the use of AI.

The third category, which we can call "prohibited uses," outlined in Article 9º-C, paragraph 1, covers synthetic content generated by AI to create, replace, or alter the image or voice of a living, deceased, or fictional person, whether to harm or to favor a candidacy.

The distinction between the actions of "creating, altering, or replacing images or voices of people" (prohibited by Article 9º-C, paragraph 1) and those of "creating, replacing, omitting, merging, altering the speed of, or overlaying images or sounds" (allowed by Article 9º-B, paragraph 1) is not clear. The difference lies not so much in the action as in the intention and effects: under Article 9º-C, such AI manipulations are prohibited only when (i) they aim to disseminate notoriously untrue or decontextualized facts and (ii) they have the potential to harm the balance of the election. When both elements are present, the manipulation is prohibited, whether it seeks to harm an opponent or to benefit the candidate.

The differentiation rests not on the type of action or AI technique used but on the deceptive intention and its effects, which introduce elements of subjectivity and indeterminacy. When should an intention be considered malicious or deceptive? And when could the effect actually alter the balance of the contest?

The answer goes beyond analyzing whether the use was consented to, because, when those two elements are present, the prohibition applies both to uses aimed at harming a rival and to those favoring the candidate.

As a prototype of prohibited unauthorized use harming a rival, imagine a candidate airing a deepfake of the opponent saying something unpopular or scandalous just before the vote. Conversely, a clearly prohibited unauthorized use aimed at benefiting the candidate would be a deepfake of a popular politician or celebrity endorsing that candidate.

In both cases, the deception is clear, as is the potential to change voting intentions as a result of misinformation. Recognizing this is crucial because it reveals that the TSE's Resolution is not against the use of artificial intelligence per se but against misinformation, produced with AI assistance, that is capable of skewing the contest.

After the objections were raised, the deputy's video was altered, using a less sophisticated technique (placing Nunes' photo over the character's face in the movie scene) to avoid the alleged illegality. But if placing a photo over a face is not illegal, why would producing the same face-substitution effect with AI be? No matter how seamlessly the face is integrated with the character Ken, no one would believe it is actually Mayor Ricardo Nunes dancing in the movie scene. The video's point was not the scene itself but the pun between the character's name "Ken" and the Portuguese word "quem" ("who"), questioning the mayor's popularity in response to criticism of the deputy's experience, all within the democratic confrontation of opinions about the contenders' qualities that prompts voters to reflect.

The TSE's jurisprudence has been quite cautious, seeking to preserve candidates' freedom in conveying ideas and forms of expression while prohibiting only untrue and offensive content [1]. It likewise preserves the legitimacy of satirical and humorous uses in the name of freedom of expression [2], as also confirmed by the Supreme Federal Court (STF) [3]. The TSE's Resolution did not seek to change this understanding for manipulations involving AI: it would make no sense to defend the free use of humor and satire except when the technical means involve AI.

There remains the hypothesis of consented use: manipulating the image or audio of the candidate themselves.

Consider a candidate with a speech impediment who uses AI to improve the fluency of their speech. Even if this manipulation could change voting intentions, would the use be illegitimate? Although AI creates something distinct from reality, the goal seems to be to better convey ideas rather than to hide a speech deficiency. And what about alterations to the candidate's appearance? Minor aesthetic changes may not seem problematic, but manipulations could create a wholly positive image disconnected from reality, misleading voters.

Within this spectrum of possible manipulations of image and voice, whether for positive self-promotion or for negative propaganda against others, the answer is not simple and depends on contextual analysis. Ultimately, what is at stake is an evaluation of the honesty of the manipulation. What AI brings that is new is the perfection of the result. The question is whether that perfection was used specifically to deceive and whether the induced error has the potential to change voting intentions. This assessment can and should be anticipated by the party or coalition, with an adequate governance structure to guide candidates and vet borderline cases, considering that, under Article 9 of the Resolution, all of these actors can be held accountable.

[1] Rp No. 060130762. Rel. Min. Carlos Horbach; Rel. designated Min. Maria Claudia Bucchianeri. Judgment: 18/05/2023; Rp No. 060137257. Rel. Min. Floriano de Azevedo Marques. Judgment: 28/09/2023. Publication: 17/10/2023; Rp No. 060135873. Rel. Min. Maria Claudia Bucchianeri. Judgment: 25/10/2022. Publication: 25/10/2022.

[2] Rp No. 060114652/DF. Rel. Min. Carlos Horbach. Judgment: 20/04/2023. Publication: 12/05/2023; R-Rp No. 060096930/DF. Rel. Min. Carlos Horbach. Judgment: 20/09/2018. Publication: 20/09/2018; a position aligned with the STF.

[3] ADI 4451/DF. Rel. Min. Alexandre de Moraes. Judgment: 21/06/2018. Publication: 06/03/2019.
