How to improve the clarity of algorithms

Torresburriel Estudio
Aug 19, 2021

In one of Apple’s latest keynotes, besides announcing new products, Tim Cook made a very powerful statement: the recognition of privacy as a human right:

https://twitter.com/torresburriel/status/1384555267036225537?s=20

Although this statement may sound grandiose, nowadays anyone’s privacy can be at risk for multiple reasons, many of them stemming from how difficult it is to know how data is actually collected and used.

That is why it is so important to apply the principles of Privacy by Design when developing any product. Here, however, we would like to focus on an issue that is especially relevant in our current context: algorithms and automated data processing.

Explaining automation

Although automation has great advantages for any business (such as time savings or increased conversions), it can lead to damaging consequences in some areas.

And if automated decisions are controversial in business, they are even more questionable in the public sphere, in government administration.

Algorithmic discrimination is only one of the possible consequences of automated decisions, but it is the most visible of them all. The application of predictive modelling in areas such as loan granting or licensing has even brought down governments and executives.

This makes it essential to be able to explain what these algorithms are, how they automate decision-making and how to report potential problems related to their application.

Both explainability and the right to compensation are included in the GDPR. The Spanish Data Protection Agency expressly addresses explainability in the section “Significant information on the logic applied” of its document Adaptation to the GDPR of processing operations that incorporate Artificial Intelligence. An introduction.

In this context, the document outlines what information is considered meaningful and how to show it to the user:

  • Details of the data used for decision-making, beyond their broad category, and in particular information on how long the data will be used.
  • The relative weight of each of these data items in the decision-making process.
  • The quality of the training data and the type of patterns used.
  • The profiling carried out and its implications.
  • Precision or error values according to the appropriate metric for measuring the accuracy of the inference.
  • Whether or not there is qualified human supervision.
  • References to audits, especially regarding possible deviations in the results of inferences, as well as any certifications performed on the AI system. In the case of adaptive or evolving systems, the most recent audit performed.
  • If the AI system contains information about identifiable third parties, the prohibition on processing such information without a legitimate basis and the consequences of doing so.
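
To make these requirements more concrete, here is a minimal sketch of how such a disclosure could be modelled as a single data structure behind a product’s transparency page. The interface, field names and example values below are our own illustration, not an official schema from the AEPD document:

```typescript
// Hypothetical data model for an algorithmic-transparency disclosure,
// loosely following the AEPD's list of "meaningful information".
// All names and values are illustrative assumptions, not an official schema.

interface AlgorithmDisclosure {
  dataUsed: {
    description: string;       // which data, beyond broad categories
    retentionPeriod: string;   // how long the data is used
    weightInDecision: number;  // relative weight in the decision, 0..1
  }[];
  trainingDataQuality: string; // source, size, known limitations
  patternTypes: string[];      // kinds of patterns or models applied
  profiling: { performed: boolean; implications: string };
  accuracy: { metric: string; value: number }; // metric suited to the inference
  humanOversight: boolean;     // is there qualified human supervision?
  lastAuditDate?: string;      // most recent audit, for adaptive systems
  certifications: string[];
  thirdPartyDataNotice?: string; // prohibition and consequences, if applicable
}

// Example disclosure for a hypothetical loan-scoring feature:
const loanScoringDisclosure: AlgorithmDisclosure = {
  dataUsed: [
    { description: "Declared monthly income", retentionPeriod: "24 months", weightInDecision: 0.4 },
    { description: "Repayment history", retentionPeriod: "60 months", weightInDecision: 0.6 },
  ],
  trainingDataQuality: "10 years of anonymised loan records; applicants under 25 are underrepresented",
  patternTypes: ["logistic regression"],
  profiling: { performed: true, implications: "Affects the maximum loan amount offered" },
  accuracy: { metric: "false positive rate", value: 0.03 },
  humanOversight: true,
  lastAuditDate: "2021-05-01",
  certifications: [],
};
```

Keeping this information in one structure makes it easier to render the same facts consistently in a privacy policy, a settings screen or a FAQ.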

However, even when this information is shown, users may find it difficult to understand what data is processed, how it is processed and what the consequences are.

Improving the transparency of algorithms

To ensure that users understand this information, we must first make sure it is genuinely understandable. For this purpose, we can distinguish three causes for which algorithms are not understood:

  1. Intentional: intellectual property rights often prevent disclosure, as with the algorithms used by search engines (Google, Bing).
  2. Lack of technical skills: users cannot understand how the algorithm works because they lack the necessary technical background.
  3. By nature: the sheer amount of data and computation an algorithm processes makes a full explanation nearly impossible.

Since these three causes can occur simultaneously, when designing any product where algorithms play a key role in decision-making it is important to ensure that all the information provided is easy to understand.

In this case, including a FAQ section can help to explain how data is collected and processed, as well as the possible consequences.
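
As an illustration, such a FAQ can be generated from structured entries and rendered with native disclosure widgets, which are keyboard-accessible by default. The entries, markup and helper below are our own sketch, not a prescribed pattern:

```typescript
// Hypothetical FAQ entries explaining data collection in plain language.
interface FaqEntry {
  question: string;
  answer: string;
}

const privacyFaq: FaqEntry[] = [
  {
    question: "What data do you collect about me?",
    answer: "Your declared income and repayment history, kept for up to five years.",
  },
  {
    question: "How is the decision made?",
    answer: "An automated model scores your application; a qualified reviewer checks every rejection.",
  },
  {
    question: "What can I do if I disagree with a decision?",
    answer: "You can request a human review and, under the GDPR, contest the automated decision.",
  },
];

// Render each entry as a native <details>/<summary> disclosure element.
function renderFaq(entries: FaqEntry[], container: HTMLElement): void {
  for (const { question, answer } of entries) {
    const item = document.createElement("details");
    const summary = document.createElement("summary");
    summary.textContent = question;
    const body = document.createElement("p");
    body.textContent = answer;
    item.append(summary, body);
    container.appendChild(item);
  }
}

// Usage: renderFaq(privacyFaq, document.querySelector("#privacy-faq")!);
```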

And, challenging as it may be, developing an easy-to-read guide that ensures cognitive accessibility will also make the information understandable to anyone, regardless of their technical knowledge of the subject.

Effective storytelling can also help build trust. Presenting this information together with a visual element that supports the explanation will make it easier to understand.

This will help ease the biggest friction point we face in UX: accessibility. If people cannot understand how something will affect them, they will probably reject it. That rejection is often motivated by fear, and providing understandable information helps them make more informed decisions.

Fear of how personal data is used can leave many people out simply because they do not understand what is done with it or whether its use is legitimate, when in many cases the data is only used to send emails or display advertising based on the user’s behaviour.

Summing up, treating privacy as a design value and explaining what data is collected and how it is used will allow users to overcome these fears and gain confidence in our product.


Torresburriel Estudio

User Experience & User Research agency focused on services and digital products. Proud member of @UXalliance