Privacy, security, and competition

*Co-authored with Maria Beatriz Previtali. Originally published in JOTA.*

*This is an AI-powered machine translation of the original text in Portuguese.*

Among the principles a legal professional encounters during their education, perhaps the most inescapable is that no regulation occurs in a vacuum: in pursuing certain objectives, others may well be compromised.

The problem lies not in favoring one interest at the expense of another, but in ignoring, at the moment of decision-making and regulation, that certain legal mandates may (or will) affect other related interests.

Recognizing these interdependencies is crucial for designing balanced and effective policies, particularly in areas of rapid advancement and broad social impact, such as Law and Technology. Some examples of these tensions are more apparent, while others are less perceptible.

A recurring topic of discussion in this context is the protection of children and adolescents in the online environment. This legitimate concern can take many forms, such as initiatives in schools or the creation of laws and codes of best practices for developing applications targeting this audience. A common conclusion on this subject is that the collection and processing of minors' personal data should be restricted.

The data minimization principle establishes that only strictly necessary information should be collected, in order to protect individuals' privacy. Combined with the particular vulnerability of children and adolescents and their "best interests" (especially in the case of children), it is not uncommon to argue that processing should be limited exclusively to the data strictly necessary for a given online application to function.

However, the technical tools for child protection rely directly on access to data to identify and prevent risks such as abuse, exploitation, and exposure to inappropriate content. Monitoring tools and detection algorithms, in turn, require large volumes of information to operate effectively.
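To illustrate this dependence, consider the deliberately simplified risk-scoring sketch below. The fields, signals, and thresholds are hypothetical assumptions invented for this illustration, not any real platform's detection logic; the point is only that each signal requires data a strictly "minimized" dataset might not retain.

```python
from dataclasses import dataclass

# Hypothetical illustration only: the fields and thresholds below are
# assumptions made for this sketch, not any real platform's logic.

@dataclass
class ContactEvent:
    sender_account_age_days: int   # behavioral signal
    recipient_is_minor: bool       # age signal
    prior_shared_connections: int  # social-graph signal
    messages_last_24h: int         # intensity signal

def grooming_risk_score(event: ContactEvent) -> float:
    """Toy heuristic: every branch depends on data that a strictly
    'minimized' dataset (sender, recipient, message content only)
    would not retain."""
    score = 0.0
    if event.recipient_is_minor and event.sender_account_age_days < 7:
        score += 0.4  # brand-new account contacting a minor
    if event.prior_shared_connections == 0:
        score += 0.3  # no common contacts with the minor
    if event.messages_last_24h > 50:
        score += 0.3  # unusually intense contact pattern
    return score  # 0.0 (low risk) to 1.0 (high risk)
```

Strip those contextual fields in the name of minimization and the heuristic has nothing left to score: the output collapses to an uninformative constant.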

At stake is a balance between, on one side, the volume of data needed to properly monitor online activities, effectively identify suspicious behavior, and prevent risky situations, and, on the other, the assurance that minors' personal information will be used appropriately and solely for the specific purpose of preventing and mitigating potential harm. Here, however, a rigid application of data minimization (as a strict rule rather than a principle) may block the collection of the very information needed to protect children online effectively.

Without sufficient data, threat detection tools become ineffective (or significantly less effective), exposing children to substantial risks. Prioritizing one aspect (data protection) without considering the other (safety) can result in policies and tools that fail to adequately meet the goals of child protection.

Another example, less discussed but with significant security implications, is credit card tokenization. This technology replaces card data with a unique token, enhancing the security of financial transactions. When a digital wallet (e-wallet) is used, a token is generated and transmitted to the merchant and then to the payment network, which converts it back into the associated account number to process the transaction. These tokens are managed by a "vault" system that securely stores the mapping between each token and the underlying card data and handles the conversion back (detokenization).
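To make the flow concrete, the sketch below models a token vault as a simple in-memory mapping. The `TokenVault` class and its methods are names invented for illustration; production systems (for instance, network tokenization under the EMVCo specification) add cryptographic controls, hardware security modules, and per-merchant token domain restrictions.

```python
import secrets

class TokenVault:
    """Minimal in-memory sketch: real vaults sit behind hardware
    security modules and apply per-merchant domain restrictions."""

    def __init__(self) -> None:
        self._token_to_pan: dict[str, str] = {}

    def tokenize(self, pan: str) -> str:
        """Replace a primary account number (PAN) with a random,
        meaningless token."""
        token = secrets.token_hex(8)
        self._token_to_pan[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        """Recover the PAN; only the vault holds this mapping."""
        return self._token_to_pan[token]

vault = TokenVault()
token = vault.tokenize("4111111111111111")  # test PAN, not a real card
# The merchant and acquirer only ever see `token`; the payment network
# asks the vault to recover the PAN and route the transaction.
assert vault.detokenize(token) == "4111111111111111"
```

The security argument for centralization falls out of this design: card numbers are recoverable only by whoever holds the vault, so the sensitive mapping never leaves a single, heavily defended perimeter.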

On one hand, the U.S. Federal Trade Commission (FTC) has questioned this practice as limiting competition; on the other, there are relevant security reasons for centralizing the technological system. Centralized tokenization provides a higher level of security, significantly reducing the risk of fraud and data breaches, while also facilitating threat monitoring and response and ensuring greater integrity in financial transactions.

Thus, requiring these vaults to be opened could compromise (or, at the very least, significantly reduce) transaction security. Fragmenting token management across multiple providers may also multiply opportunities for fraud, putting customer data at risk.

This creates a tension between the need to ensure maximum security in transactions and the promotion of a competitive environment. Each objective must be viewed in light of the impacts it may have on the other, as ignoring their interdependence could lead to negative consequences on both fronts.

Finally, some cases require weighing even the extraterritorial and international impact of certain regulatory decisions, commonly referred to as the "Brussels effect" when they originate in Europe.

This effect is illustrated in discussions surrounding various technology regulations, such as the European Union's Artificial Intelligence Act (AI Act), the Digital Markets Act (DMA), and the General Data Protection Regulation (GDPR). These regulatory frameworks are often seen by many actors (public, private, academic) as models for other countries, largely because of their broad approach to these themes and their frequently pioneering role in regulating emerging issues.

In the specific case of the AI Act, certain technologies are banned outright, such as systems that classify people based on personal traits or social behavior, predict criminal tendencies, or infer emotions in workplaces or educational institutions; others are restricted, such as the use of facial recognition and biometrics in surveillance cameras for security and police investigations.

On one hand, the stringent requirements of the European AI Act are criticized, particularly for creating an "unpredictable" regulatory environment and increasing operational costs. On the other hand, there is the risk of a regulatory gap that allows European players to develop and commercialize in third countries the very technologies the European Union has decided to ban or restrict.

This scenario mirrors issues observed with other EU regulations (e.g., the Deforestation Regulation) which, despite good intentions, risk inducing behaviors contrary to their own objectives, particularly in Global South countries, because their extraterritorial effects were not adequately assessed beforehand.

Therefore, it becomes evident that failing to consider the interdependencies between different legal mandates can lead to undesirable consequences. A regulation focused solely on promoting competition without evaluating its impact on security could compromise the integrity of financial transactions and consumer trust.

In online child protection, an exclusive emphasis on data minimization could inadvertently expose children to greater risks, undermining the protective mandates of both data privacy legislation and child protection statutes. A regulation that is highly influential and protective in one respect (e.g., limiting data collection) may open gaps and produce negative effects in another (e.g., reduced effectiveness in the very online protection of minors it seeks to achieve).

Such situations underscore the need to consider the adverse effects of pursuing well-intentioned objectives. The problem does not lie in favoring one interest over another, but in failing to recognize and weigh the implications of these choices beforehand so that they can be balanced appropriately.

It may not always be possible to achieve perfect alignment between competing interests. But identifying potentially conflicting objectives and carefully analyzing the behaviors induced by different regulatory options is a necessary step toward a regulatory framework that consciously chooses its path, rather than one built on decisions taken without evaluating the potential effects of a given option on other, equally legitimate objectives.
