Artificial intelligence and the LGBTQIAPN+ community

*This is an AI-powered machine translation of the original text in Portuguese.

Legal protection for the LGBTQIAPN+ community[1] has been continuously evolving, but there is still much to be achieved. Because this community is subject to repeated discrimination, ranging from social exclusion to physical violence, it is crucial to be aware of the potential negative repercussions of new technologies, especially for marginalized groups.

While the debate over technological regulation is essential, social minorities cannot lose sight of the need to discuss the different impacts that technology can have on them. To do so, it is necessary to understand the risks that artificial intelligence (AI) can pose to the LGBTQIAPN+ community, as well as the potential benefits that can be achieved through it.

The misuse of AI can significantly threaten the rights and well-being of LGBTQIAPN+ individuals. Despite undeniable advancements in establishing and safeguarding the rights of this community, progress is slow and uneven around the world; for example, one in three countries still criminalizes homosexual relationships.[2]

In Brazil, despite the absence of criminalization, rates of violence against LGBTQIAPN+ individuals remain high.[3] In this context, the widespread application of AI technologies can be problematic in various ways.

One proposed application of AI technologies is the identification of LGBTQIAPN+ individuals. The possibility of identifying sexual orientation and/or gender identity through technological means is inherently problematic. Some technologies that have been tested for this purpose include facial recognition and data analysis.

Currently, facial recognition algorithms can identify individuals based on their physical characteristics. Facial recognition has also been used in widely criticized studies that sought to correlate physical characteristics with sexual orientation and/or gender identity, as in the case where a U.S. professor used dating app profiles to train and test the technology.[4]

Similarly, each person's online behavior can be used to classify them based on their characteristics (e.g., interests, browsing time, age, etc.). Content-targeting algorithms rely on this same logic to personalize what each user sees. In recent years, technology companies have limited the use of characteristics such as sexual orientation and gender identity for these purposes, in response to organized civil society groups questioning the appropriateness of such use.[5]
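To make the mechanism concrete, below is a minimal, deliberately simplified sketch of how ordinary behavioral signals can be turned into a classifier for a sensitive attribute. Everything in it is hypothetical: the features, the data, and the attribute are synthetic inventions for illustration, not taken from any real platform.

```python
# Illustrative sketch only: synthetic data showing how ordinary behavioral
# signals (interests, browsing time, age) can train a classifier that
# infers a sensitive attribute. All features and data here are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1_000

# Hypothetical behavioral features a platform might log per user.
X = np.column_stack([
    rng.random(n),           # share of time spent on interest category A
    rng.random(n),           # share of time spent on interest category B
    rng.normal(35, 10, n),   # age
])

# Synthetic "sensitive attribute", correlated with the first feature.
# The point: mere correlation with logged behavior enables inference.
y = (X[:, 0] + rng.normal(0, 0.3, n) > 0.6).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Inference accuracy from behavior alone: {clf.score(X_test, y_test):.0%}")
```

The specific model is beside the point; the pattern is that any attribute correlated with logged behavior can, in principle, be inferred from that behavior, which is precisely why civil society pressed platforms to restrict such targeting.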

Setting aside questions about the quality and accuracy of this type of identification (e.g., its error rate), both technologies raise serious ethical issues. First, individual privacy is violated when someone's sexual orientation is identified without their knowledge. Using any technology to identify, correctly or incorrectly, a person's sexual orientation or gender identity implies handling sensitive information about that individual.[6]

Therefore, such identification cannot occur without a proper legal basis, as provided by Article 11 of the General Data Protection Law (LGPD).[7] The issue runs so deep that even the European Commission has drawn attention to the risks of discriminating against people who "appear to be gay," regardless of whether they are or not.[8] In other words, privacy violations occur even if the inference made is incorrect.

Second, this type of system can give rise to discrimination and algorithmic bias against LGBTQIAPN+ individuals. If algorithms are trained with biased or unrepresentative data, they can perpetuate existing prejudices and stereotypes or generate new forms of discrimination.

The problems that AI can generate when trained with biased information are numerous. This failure has already been widely demonstrated in relation to Black individuals, as in cases of incorrect identification of Black couples in photos[9] or of recidivism risk assessments that disadvantaged Black defendants based on their ethnicity.[10]

In summary, this type of problem can occur due to the lack of representation in data (e.g., insufficient information about a group) or biased representation (e.g., information that reproduces entrenched social prejudices).
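The first failure mode (insufficient representation) can be illustrated with a small synthetic experiment. The groups, features, and numbers below are invented for the sketch, which assumes nothing beyond a generic scikit-learn workflow.

```python
# Illustrative sketch: how under-representation in training data turns into
# unequal error rates. The data is synthetic and the groups are abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group has a slightly different relationship between features and label.
    X = rng.normal(shift, 1.0, (n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(int)
    return X, y

# The majority group dominates training; the minority group is barely present.
X_maj, y_maj = make_group(950, shift=0.0)
X_min, y_min = make_group(50, shift=2.0)

model = LogisticRegression().fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

# Evaluating each group on fresh samples exposes the gap: the
# under-represented group gets noticeably lower accuracy.
for name, shift in [("majority", 0.0), ("minority", 2.0)]:
    X_eval, y_eval = make_group(2_000, shift)
    print(f"{name} accuracy: {model.score(X_eval, y_eval):.0%}")
```

Because the minority group contributes almost nothing to the training objective, the model fits the majority's pattern and quietly performs worse on everyone else, which is exactly the dynamic described above.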

In the case of the LGBTQIAPN+ community, the automated reproduction of these patterns can occur in different situations, such as job screening, credit scoring, and content targeting, among others, each with greater or lesser potential to harm data subjects.

For example, consider a resume-screening algorithm that takes sexual orientation and gender identity into account in its decisions. Beyond the consent and data protection issues this raises, there is a risk that the database used to train the AI is biased against LGBTQIAPN+ individuals, especially trans people, who historically face lower employability and less recognition of their professional skills.
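The subtle part is that simply deleting the sensitive column does not remove the bias. The hypothetical sketch below (synthetic data, with invented feature names such as gap_years) shows historical prejudice surviving through a correlated proxy feature:

```python
# Illustrative sketch: a screening model trained on biased historical hiring
# decisions reproduces the bias even without the sensitive column, because
# another feature acts as a proxy for it. Entirely synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 2_000

qualification = rng.normal(0, 1, n)      # genuine signal of competence
group = rng.integers(0, 2, n)            # 1 = marginalized group (hidden)
gap_years = group * rng.poisson(2, n)    # proxy: employment gaps correlated
                                         # with group membership

# Historical labels encode prejudice: group members were hired less often
# than their qualifications warranted.
hired = (qualification - 1.5 * group + rng.normal(0, 0.5, n) > 0).astype(int)

# Train WITHOUT the sensitive column: only qualification and the proxy.
X = np.column_stack([qualification, gap_years])
model = LogisticRegression().fit(X, hired)

# Two equally qualified candidates, differing only in the proxy feature.
probe = np.column_stack([np.zeros(2), [0, 2]])
print(model.predict_proba(probe)[:, 1])  # lower score for the proxy profile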

Beyond these problems, from a practical standpoint, this type of application can also be used to persecute the community. For example, LGBTQIAPN+ dating apps (particularly those focused on homosexual relationships) have been repeatedly reported for questionable security practices.

In recent years, reports have ranged from data leaks to the trilateration of users (i.e., malicious third parties pinpointing a person's location to within meters by combining the distance readings the app reports), and even the use of apps to actively persecute users in countries where homosexuality is criminalized. With the development of AI, this type of risk is exacerbated, facilitating malicious actions against this population.
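To see why distance disclosure is so dangerous, here is a simplified sketch of the attack: if an app reveals a user's distance from wherever the querying account claims to be, three queries from different positions are enough to solve for the user's coordinates. The reported_distance helper is an invented stand-in for the app's behavior, and flat 2D coordinates replace real geographic math.

```python
# Illustrative sketch of the location-inference risk described above:
# three distance readings pin down a position exactly (trilateration).
import numpy as np

user = np.array([3.7, 8.2])  # the victim's true (secret) position

def reported_distance(query_point):
    # Invented stand-in for the app's "distance to this user" feature.
    return np.linalg.norm(user - query_point)

# The attacker queries from three positions of their choosing.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
d = np.array([reported_distance(a) for a in anchors])

# Each reading defines a circle |x - a_i| = d_i; subtracting the circle
# equations pairwise yields a linear system A @ x = b.
A = 2 * (anchors[1:] - anchors[0])
b = (d[0] ** 2 - d[1:] ** 2
     + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))

estimate = np.linalg.solve(A, b)
print(estimate)  # recovers [3.7, 8.2]
```

Real attacks apply the same idea on the Earth's surface, which is why mitigations typically round, randomize, or hide the reported distance.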

However, AI also offers opportunities for significant advances in promoting the inclusion of the LGBTQIAPN+ population. For example, technology can be used to create safe virtual spaces where this community can express itself and interact freely, to create relevant content for this population, and to advance research into the community's specific health needs so that services can be tailored to LGBTQIAPN+ individuals.

Between potential risks and benefits, AI has become a relevant tool in our daily lives. The advancement of LGBTQIAPN+ rights in Brazil does not eliminate the challenges faced by the community but should help guide how we will explore the capabilities of this technology. It is essential that we are aware of emerging ethical challenges and seek the necessary safeguards to protect not only the general population but also social minorities in areas where new technologies impact them differently. The policies and regulations being discussed around the world should be established to ensure ethical and responsible development and use of technology, respecting human rights and the dignity of individuals.

[1] The acronym currently stands for: lesbian, gay, bisexual, transgender, queer, intersex, asexual, pansexual, and non-binary people, as well as other sexual orientations and gender identities represented by the "+".

[2] https://ilga.org/downloads/ILGA_World_State_Sponsored_Homophobia_report_global_legislation_overview_update_December_2020.pdf

[3] https://observatoriomorteseviolenciaslgbtibrasil.org/doacao/ong-lgbt/

[4] https://www.nytimes.com/2017/10/09/science/stanford-sexual-orientation-study.html

[5] https://www.forbes.com/sites/annakaplan/2021/11/09/meta-says-it-will-limit-ad-targeting-based-on-race-sexual-orientation-political-affiliation-and-more/; https://support.google.com/adspolicy/answer/143465?hl=en

[6] https://www.corteidh.or.cr/docs/casos/articulos/seriec_315_esp.pdf

[7] https://www.scielo.br/j/rdp/a/sjf8hNGcJs3v9L7kf8y6GLt/abstract/?lang=en

[8] https://cordis.europa.eu/article/id/252276-why-possessing-a-gay-voice-can-lead-to-discrimination

[9] https://www.bbc.com/news/technology-33347866

[10] https://www.nytimes.com/2017/10/26/opinion/algorithm-compas-sentencing-bias.html

*Originally published in Conjur.

**Image: vecstock.
