
3 reasons why interpretability is gaining importance in the machine learning world

What is machine learning?

Machine learning, ML for short, is a concept that, along with others such as Big Data or Deep Learning, has seen its popularity grow exponentially in recent years. Machine learning is a branch of Artificial Intelligence that aims to build systems that learn automatically from data. There are many examples of ML applications in our daily lives, such as the movie recommendation systems used by streaming platforms like Netflix.

Interpretability in machine learning models

In machine learning problems, the most accurate models are usually the most complex. This is because these models look for patterns that capture relationships among a large number of variables in high-dimensional spaces and can therefore "see" relationships and information that are very difficult for a human being to capture and interpret. This is their main strength, but it also presents perhaps the main obstacle to overcome: the best models are usually the least interpretable.

Until recently, the interpretability of these models was not considered very relevant in the academic world. The objective of most lines of research was to find new models, or new ways of applying them, that improved accuracy or goodness of fit. The fact that this led to a significant loss of interpretability in the results was not considered important. As a result, when applying a machine learning model to real problems, the same dichotomy always arose: accuracy or interpretability? A simple model, less accurate but easily understandable, or a complex, very accurate but "black box" model, i.e., one where I know what goes in and what comes out, but not what happens in between?

Fortunately, this way of thinking has changed in recent years, and extracting interpretability from complex models is now a popular line of research.

Interpretability in machine learning applied to medicine

At Sigesa we believe that this is the line to follow, especially in the healthcare field. Among the reasons why we believe in the importance of interpretability in the world of machine learning applied to medicine, we can highlight three:

  1. Knowledge extraction: ML models in medicine are useful not only for generating accurate predictions, but also for generating insights that lead to new knowledge or point to possible lines of research. As we will see in a future article, Interpretability vs. causality, it is important not to confuse interpretability with causality, but even so, the interpretation of ML models can provide us with relevant clinical or management information.
  2. Criticality: The field of medicine has some particularities with respect to other sectors. One of them is how critical its decision making is. A wrong movie recommendation from Netflix does not have the same negative impact as a wrong prediction from a model that tries to detect, for example, the presence of a certain disease. For this reason, it is especially important to be able to analyze the causes of possible errors or poorly adjusted predictions of an ML model used in the medical field.
  3. Ethical control: Analyzing which factors have contributed to the decision of a predictive machine learning model allows us to detect possible biases, by gender, race, age, etc., in an effective manner. This is especially relevant in the healthcare field because of the high criticality mentioned in the previous point.
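The kind of bias check described in the last point can be sketched very simply: compare the model's error rate across demographic groups and flag large gaps. All names and data below are invented for illustration; this is a minimal sketch, not a full fairness audit:

```python
# Hypothetical predictions from a diagnostic model (invented data):
# each record holds the model's prediction, the true label, and the
# demographic attribute we want to audit (here, sex).
records = [
    {"pred": 1, "true": 1, "group": "female"},
    {"pred": 0, "true": 1, "group": "female"},
    {"pred": 1, "true": 1, "group": "female"},
    {"pred": 0, "true": 0, "group": "female"},
    {"pred": 1, "true": 1, "group": "male"},
    {"pred": 1, "true": 1, "group": "male"},
    {"pred": 0, "true": 0, "group": "male"},
    {"pred": 1, "true": 1, "group": "male"},
]

def error_rate_by_group(records):
    """Fraction of wrong predictions per demographic group."""
    totals, errors = {}, {}
    for r in records:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        errors[g] = errors.get(g, 0) + (r["pred"] != r["true"])
    return {g: errors[g] / totals[g] for g in totals}

rates = error_rate_by_group(records)
# A large gap between groups is a red flag worth investigating.
disparity = max(rates.values()) - min(rates.values())
```

In this toy data the model errs on 1 of 4 female cases and 0 of 4 male cases, so the disparity of 0.25 would prompt a closer look at which features drive those errors.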

Algorithms for extracting interpretability in ML

The interpretability of machine learning models is still at an early stage of research. However, several algorithms have already been developed and have shown very good results in different studies and simulations.
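One widely used model-agnostic technique of this kind is permutation feature importance: shuffle one feature's values, breaking its link to the target, and measure how much the model's error grows. The sketch below is a generic, minimal illustration with an invented toy model and synthetic data, not one of the proprietary algorithms mentioned here:

```python
import random

# Toy "model": a fixed linear predictor in which feature 0 matters
# strongly, feature 1 weakly, and feature 2 not at all.
def predict(row):
    return 3.0 * row[0] + 0.5 * row[1] + 0.0 * row[2]

def mse(model, X, y):
    return sum((model(row) - y_i) ** 2 for row, y_i in zip(X, y)) / len(X)

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Importance of each feature = average increase in MSE after
    shuffling that feature's column across the dataset."""
    rng = random.Random(seed)
    baseline = mse(model, X, y)
    importances = []
    for j in range(len(X[0])):
        increases = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            X_perm = [row[:j] + [col[i]] + row[j + 1:]
                      for i, row in enumerate(X)]
            increases.append(mse(model, X_perm, y) - baseline)
        importances.append(sum(increases) / n_repeats)
    return importances

# Synthetic data generated by the same linear rule.
rng = random.Random(42)
X = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(200)]
y = [predict(row) for row in X]

imp = permutation_importance(predict, X, y)
```

The useless feature 2 gets an importance of zero, while feature 0 ranks far above feature 1, matching their true weights. The appeal of this technique is that it treats the model as a black box: it needs only predictions, so it works for any ML model.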

At Horus ML we have developed our own explanatory algorithms, a mix of established interpretability methods and internal developments, to generate interpretability for each of our ML models. For more information about our machine learning models, or about how these algorithms can help you gain interpretability on your predictions, please contact us.
