AI and data protection
It is apparent that the use of artificial intelligence raises several issues with respect to the protection of personal data.
AI technology is characterized by machines, software and algorithms capable of processing very large amounts of data and information. Data processing is therefore an intrinsic component of this technology, which has applications in the most diverse areas, including private life: consider home automation, health or home management in general, and in particular so-called digital assistants and smart homes.
Risk-based approach on AI by design and by default
The central role of data in AI calls for special attention to fundamental privacy principles, first and foremost the risk-based approach and the principle of data protection by design and by default.
Those processing data through AI-based systems must address the protection of personal data before processing begins, by adopting appropriate security measures to protect the rights and freedoms of data subjects. In addition, measures must be taken to ensure that, by default, only the personal data necessary for each specific processing purpose are processed, in accordance with the principle of minimization. Programming techniques and platforms that take security into account from the earliest design phases are widely available, as is the case, for example, with Kubernetes.
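The minimization-by-default idea can be made concrete in code: a processing pipeline only receives the fields declared necessary for a given purpose, and everything else is stripped first. The purpose names and field names below are purely illustrative, not drawn from any standard or regulation.

```python
# Hypothetical sketch of data minimization "by default": each processing
# purpose declares the fields it strictly needs, and every other field is
# dropped before the data reaches the AI pipeline.
ALLOWED_FIELDS = {
    "music_recommendation": {"user_id", "listening_history"},
    "fraud_monitoring": {"user_id", "transaction_amount", "timestamp"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Return only the fields necessary for the given processing purpose."""
    allowed = ALLOWED_FIELDS.get(purpose)
    if allowed is None:
        # No declared purpose means no processing at all.
        raise ValueError(f"no processing basis declared for purpose: {purpose}")
    return {k: v for k, v in record.items() if k in allowed}

record = {
    "user_id": "u-42",
    "listening_history": ["jazz", "ambient"],
    "home_address": "10 Example Street",   # never needed for recommendations
    "transaction_amount": 19.99,
}
print(minimize(record, "music_recommendation"))
# → {'user_id': 'u-42', 'listening_history': ['jazz', 'ambient']}
```

The design choice worth noting is the default direction: fields are dropped unless explicitly allowed, rather than kept unless explicitly excluded.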
Furthermore, the security measures adopted cannot be generic, but must be appropriate having regard to the intrinsic risk of each processing activity, taking into account the state of the art and the technical characteristics of the systems involved. It is therefore necessary that the controller of data processed through AI systems carries out a precise analysis of the risks aimed at the adoption of the most appropriate measures.
Automated decision-making processes
Among the most sensitive privacy implications of AI are automated decision-making processes, i.e. decisions based solely on automated processing, which may include the evaluation of personal aspects and produce legal effects on individuals.
The legislation is particularly cautious with respect to such processes: data subjects are granted the right not to be subject to decisions based solely on automated processing, without human intervention.
However, this prohibition is not absolute, as there are some relevant exceptions, including processing:
- for purposes of monitoring and preventing fraud and tax evasion;
- necessary for entering into or the performance of a contract between the data subject and the controller;
- to which the data subject has given his or her explicit consent.
To ensure the protection of data subjects in the case of automated decision-making, a specific data protection impact assessment must be carried out before processing begins, in addition to the risk analysis mentioned above.
This is all the more necessary when AI is used as a tool for processing the underlying data. It should be remembered that one of the fundamental risks from a data protection standpoint is that AI draws conjectures and inferences about individuals on the basis of statistical recurrences or causal regularities observed in the data, which can lead to discrimination against individuals or social groups.
Transparency on AI and data protection
Another fundamental principle to comply with when using AI tools is transparency. The controller must always explain to the data subject, in a simple and timely manner, how it intends to process their personal data. At the same time, the GDPR principle of accountability requires that those processing personal data remain in full control of the logic underlying each processing operation.
The data subject must also be given the opportunity to give consent, where required, in an easy and free manner. Denial of consent should therefore not prevent the data subject from using AI-based services altogether, but only those for which consent to the processing of personal data is necessary (think of a robotic assistant that selects music based on the data subject's preferences).
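The point about partial availability can be sketched as code: denying consent degrades only the feature that needs personal data, not the whole service. The class and feature names are invented for illustration.

```python
# Illustrative sketch: consent gates only the consent-requiring feature.
class Assistant:
    def __init__(self, music_consent: bool):
        # Whether the data subject consented to preference processing.
        self.music_consent = music_consent

    def tell_time(self) -> str:
        # No personal data involved: always available, regardless of consent.
        return "12:00"

    def recommend_music(self) -> str:
        # Processes listening preferences, so it requires explicit consent.
        if not self.music_consent:
            return "unavailable: consent to preference processing not given"
        return "playing: your favourite jazz playlist"

a = Assistant(music_consent=False)
print(a.tell_time())        # still works without consent
print(a.recommend_music())  # only this feature is degraded
```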
A correct use of artificial intelligence cannot disregard privacy principles. Data controllers using AI should approach the issue critically, assessing the risks and actual impacts of each processing operation and adopting and documenting all necessary measures. There is, in this respect, consensus among practitioners on the criticality of AI systems for privacy. Among the technological solutions identified to prevent the most harmful effects are federated machine learning (an alternative to aggregating masses of data into single silos) and so-called explainability, an area of research that uses sophisticated techniques to increase transparency in both simple systems, such as decision trees, and complex systems, such as neural networks.
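The federated-learning idea mentioned above can be conveyed with a minimal sketch: each client fits a tiny model on its own data and shares only the resulting parameters, which the server averages without ever seeing the raw data. This toy weighted averaging of one-parameter linear models is an illustration of the principle, not a production protocol.

```python
# Minimal federated-averaging sketch (illustrative only): raw data stays on
# each client; only trained parameters travel to the server.

def local_fit(xs, ys):
    """Least-squares slope through the origin on one client's private data."""
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs)
    return num / den

def federated_average(client_params, client_sizes):
    """Server-side average of local parameters, weighted by dataset size."""
    total = sum(client_sizes)
    return sum(p * n for p, n in zip(client_params, client_sizes)) / total

# Two clients whose data never leaves the device.
clients = [([1.0, 2.0], [2.0, 4.0]),   # this client's data has slope 2
           ([1.0, 3.0], [3.0, 9.0])]   # this client's data has slope 3
params = [local_fit(xs, ys) for xs, ys in clients]
sizes = [len(xs) for xs, _ in clients]
print(federated_average(params, sizes))  # → 2.5
```

The privacy gain is structural: the server only ever handles the aggregated parameters, so the mass of personal data is never collected into a single silo.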