Slaves to the algorithms?
The use of artificial intelligence in business operations is increasingly emerging as a strategy for process efficiency, not least because of its ability to process huge amounts of data and even extract new information from them.
These systems work with algorithms which, in most cases, are able to learn on their own how to improve their performance (so-called “machine learning”) and autonomously search for the data they need.
Artificial intelligence (AI) skills
AI analyses all available data related to a given situation, identifying relationships between them and drawing conclusions in terms of expected scenarios and strategic decisions to be made in order to improve the business.
There are at least two things Artificial Intelligence cannot do:
- indicate possible scenarios when faced with new situations, for which AI does not have sufficient data to identify the causal regularities that would lead it to prefer one solution over another; and
- provide its users with details of the relationships it has established between the data; as a result, it cannot explain the logical path that led to its decision.
Think about financial choices: by processing millions of variables, an impossible task even for the most experienced and skilled human director, the algorithm suggests the best investment choice, but it cannot explain why that choice is actually the best one.
This essentially makes the algorithm a black box with enormous skills. The question, therefore, is to what extent company directors can deviate from the decisions suggested by AI tools without incurring liability for management decisions that later prove unsuccessful.
Moreover, it is necessary to assess whether they are even obliged to depart from AI-suggested decisions when it appears appropriate to do so.
The obligation to justify and the judicial assessment
Judicial review can never evaluate the merits of management decisions. The judge weighs only the logical path the directors followed in reaching a decision. This rule aims to preserve private autonomy and the freedom of economic initiative.
This approach, however, clashes with the black-box nature of algorithmic decision-making described above.
Since it is not possible to reconstruct the reasoning underlying the AI’s choices, it is not even possible to verify whether those choices were legitimate.
It is therefore clear that management choices cannot be delegated to the algorithm, which can instead serve as a tool for the human directors.
The conflict between Artificial Intelligence and human directors
The board not only has to justify the decisions made, but also its choice of the AI tool used in its work. If the human directors disagree with the algorithmic decision, they are obliged to deviate from the robotic choice and explain why.
If the human directors’ decision, taken in contrast with the robotic one, ultimately proves unsuccessful, they are exempt from liability provided that:
- they gave reasons for departing from the AI’s suggestion, and
- those reasons show that their approach was reasonable and diligent.
AI tools are a precious asset in business operations. However, until technology can make them fully transparent, and therefore completely trustworthy, they should be used with the utmost caution. Human decision makers are responsible for their use, and they will always have to explain why they did or did not trust the AI-advised decision.