Generative artificial intelligence as an ally of misinformation

*This is an AI-powered machine translation of the original text in Portuguese.

Generative AI applied to images has become a recurring phenomenon. In the earliest cases (such as the image of Pope Francis wearing a puffer jacket), the surprise came from unfamiliarity with the technology: few people knew that realistic photos could be generated by artificial intelligence. Today, these images still surprise, but now because of their quality. Their fidelity is so high that Google announced, at its Google I/O conference, a tool to alert users that images created by its AI models were generated by a computer.
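Tools like the one Google announced generally rely on provenance signals attached to the image file itself. As a rough illustration of the idea (not Google's actual mechanism), the sketch below uses the Pillow library to check a couple of free-text EXIF fields for mentions of known generators; the marker list, tag choice, and file path are purely illustrative.

```python
from PIL import Image
from PIL.ExifTags import TAGS

# Purely illustrative markers; real provenance systems use dedicated
# metadata standards rather than free-text EXIF fields.
AI_MARKERS = ("dall-e", "firefly", "midjourney", "stable diffusion")


def exif_hints_ai(path: str) -> bool:
    """Return True if common EXIF text fields mention a known image generator."""
    exif = Image.open(path).getexif()
    for tag_id, value in exif.items():
        tag_name = TAGS.get(tag_id, "")
        if tag_name in ("Software", "ImageDescription") and isinstance(value, str):
            if any(marker in value.lower() for marker in AI_MARKERS):
                return True
    return False


if __name__ == "__main__":
    print(exif_hints_ai("example.jpg"))  # hypothetical file path
```

A negative result proves nothing, of course: metadata can be stripped or forged, which is one reason detection of the content itself is also being pursued.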

Image manipulations are not new; they date back to the early days of photography and have evolved with technology. There are numerous cases of digital image alterations, and there are even studies on how this impacts the self-esteem of the general population. However, generative AI has made these digital image manipulations astonishingly convincing, enabling the creation of realistic images from simple text commands. This ease of use allows phenomena like misinformation to gain strength.

Misinformation through text is already well known and has been studied in search of effective countermeasures. Visual support, however, adds a new level of apparent credibility to these messages. If the widespread circulation of text messages has already been linked to serious information problems [1], the potential impact of the same text combined with a corroborating "photo" cannot yet be fully measured. The worn-out saying that "a picture is worth a thousand words" takes on a new meaning in this context: if texts lacking credibility on their own are already capable of undermining trust in the media and in information distribution, the consequences of those same texts backed by a convincing image could be much greater.

One of the tools used for this type of creation, DALL-E, uses machine learning and neural networks to generate images from textual descriptions. Recently, Adobe announced the integration of a generative AI tool into its well-known Photoshop program. The tool, called Firefly, works similarly to DALL-E: it fills parts of an image generatively, giving users finer control over how images are altered and letting those alterations be made directly within the program. For now, the Photoshop integration is in beta, with some restrictions: it is not available to users under 18, and there are limitations in China. Adobe expects Firefly to be fully released to the general public in the second half of 2023.
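To give a sense of how low the barrier to creating such images has become, here is a minimal sketch of a programmatic text-to-image request, assuming the official OpenAI Python SDK and a valid API key; the model name, prompt, and size are illustrative.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

# A single line of text is enough to request a photorealistic image.
response = client.images.generate(
    model="dall-e-2",  # illustrative model name
    prompt="photorealistic photo of a person in a white puffer jacket on a rainy street",
    n=1,
    size="1024x1024",
)

print(response.data[0].url)  # URL where the generated image can be downloaded
```

Firefly's generative fill follows the same principle inside Photoshop: the user selects a region, types a description, and the model synthesizes the pixels.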

Meanwhile, worldwide, the debate about the risks of generative AI is growing, and efforts to regulate this technology are accelerating. In Brazil, there is a bill in progress, Bill No. 2,338 of 2023, authored by Senator Rodrigo Pacheco, which addresses the use of Artificial Intelligence. Some even advocate for a halt in the development of the technology until regulation is established [2].

What can be done, then, while legal regulation of AI use is pending? Some experts [3] believe that the detection of AI use is an important intermediate step. Compared to regulation, which tends to be slower, investing in technologies that distinguish between human-produced and AI-generated content [4] could be a solution to mitigate the potential risks of the technology while regulation is still being developed. Losing the ability to distinguish between what was created by people and what was generated by AI can lead to various problems, from plagiarism to the creation of disinformation campaigns. On the flip side of this same coin is the question: what happens if one of these tools fails and indicates that a person's creation was actually made with artificial intelligence?
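For text, publicly available detectors such as those cited above rely, at least in part, on statistical signals, for example how "predictable" a passage is to a language model. The sketch below is a minimal illustration of that idea, not the actual algorithm of DetectGPT or GPTZero, using GPT-2 through the Hugging Face transformers library; the threshold is invented for illustration, and, as noted above, a heuristic like this will sometimes misclassify human-written text.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def perplexity(text: str) -> float:
    """Perplexity of the text under GPT-2; lower means more 'predictable'."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()


# Illustrative cutoff only; real detectors calibrate on labelled data.
PERPLEXITY_THRESHOLD = 60.0


def looks_ai_generated(text: str) -> bool:
    return perplexity(text) < PERPLEXITY_THRESHOLD
```

The false-positive question raised above is exactly the weakness of such heuristics: a human writer with a plain, predictable style can fall below the threshold.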

Another aspect of this discussion concerns who should invest in these detection technologies. The profits from detecting generative output are unlikely to compare with those provided by the creative tools themselves, which could discourage private investment. To some extent, this is already visible in publicly accessible AI detection tools (e.g., GPTZero), whose effectiveness is questionable. Still, the proliferation of this type of technology may drive the need to identify AI creations, much as the growth of the internet and the spread of computer viruses made antivirus software a necessity. The fact is that the potential for harm demands a response, and while legal responses are being developed, technical responses are needed to help ensure the responsible development of the technology and to mitigate risks stemming from generative AI, such as the amplification of misinformation in the media.

References:

BEYER, Jan Nicola. The race to detect AI can be won. 2023. Available at: https://www.politico.eu/article/artificial-intelligence-ai-detection-race-can-be-won/. Accessed: 13 June 2023.

BOHANNON, Molly. Photoshop Adds AI Image Generator As Concerns Over Fake Images Rise. 2023. Available at: https://www.forbes.com/sites/mollybohannon/2023/05/23/photoshop-adds-ai-image-generator-as-concerns-over-fake-images-rise/?sh=ecc28ff25a46. Accessed: 4 June 2023.

FUTURE OF LIFE INSTITUTE. Pause Giant AI Experiments: An Open Letter. 2023. Available at: https://futureoflife.org/open-letter/pause-giant-ai-experiments/. Accessed: 18 May 2023.

WEATHERBED, Jess. Adobe is adding AI image generator Firefly to Photoshop: Generative Fill will arrive in Photoshop in "the second half of 2023". 2023. Available at: https://www.theverge.com/2023/5/23/23734027/adobe-photoshop-generative-fill-ai-image-generator-firefly. Accessed: 4 June 2023.

 

[1] Elections in UK and US at risk from AI-driven disinformation, say experts: https://www.theguardian.com/technology/2023/may/20/elections-in-uk-and-us-at-risk-from-ai-driven-disinformation-say-experts

[2] See, for example, the famous open letter released in March of this year, titled "Pause Giant AI Experiments."

[3] e.g., Jan Beyer

[4] Such as DetectGPT and GPTZero for text and AI Image Detector for visual resources.

 

*Coauthored with Bruno Farage da Costa Felipe. Originally published in Conjur.

**Image: Freepik
