Machine learning’s limitations in avoiding automation of bias

Publication Authors

Abstract

The use of predictive systems has expanded alongside the development of the underlying computational methods and the evolution of the sciences in which these methods are applied. These methods include machine learning techniques, face and voice recognition, temperature mapping, and others within the artificial intelligence domain. They are being applied to problems in socially and politically sensitive areas such as crime prevention and justice management, crowd management, and emotion analysis, to mention only a few. However, applying these methods can yield divergent predictions and misclassification, for example in conviction risk assessment or in decision-making processes for the design of public policies. The goal of this paper is to identify current gaps in achieving fairness in predictive systems by analyzing the academic and scientific literature available up to 2018. To this end, we gathered materials indexed in Web of Science and Scopus over the last five years and analyzed the proposed methods and their results in relation to bias as an emerging issue in the artificial intelligence field. Our tentative conclusions indicate that machine learning has intrinsic limitations that lead to the automation of bias when predictive algorithms are designed. Consequently, other methods should be explored, or we should redefine how current machine learning approaches are used when building decision-making and decision-support systems for crucial institutions of our political systems, such as the judicial system.
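To make the notion of "automating bias" concrete, the following is a minimal, hypothetical sketch, not drawn from the reviewed literature: a standard classifier is trained on synthetic data whose recorded labels encode a historical bias against one group, and its false positive rate is then compared across groups. The variable names, the 0.15 bias rate, and the use of scikit-learn's LogisticRegression are assumptions made purely for illustration.

    # A minimal, hypothetical sketch (not taken from the reviewed literature) of how
    # bias present in training data is reproduced by a standard classifier, even
    # when the sensitive attribute is excluded from the model's inputs.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(seed=0)
    n = 20_000

    # Assumed setup: a binary group membership and a single "risk" feature that
    # acts as a proxy, shifted upward for group 1 by historical factors.
    group = rng.integers(0, 2, size=n)
    risk = rng.normal(loc=0.8 * group, scale=1.0)

    # The true outcome depends only on the risk feature plus noise...
    true_outcome = (risk + rng.normal(scale=1.0, size=n) > 1.0).astype(int)
    # ...but the recorded training labels add extra positives for group 1,
    # standing in for biased historical decisions.
    extra_positives = (group == 1) & (rng.random(n) < 0.15)
    training_label = np.where(extra_positives, 1, true_outcome)

    # The sensitive attribute is NOT a feature; only the proxy "risk" is used.
    X = risk.reshape(-1, 1)
    model = LogisticRegression().fit(X, training_label)
    predictions = model.predict(X)

    # Compare the false positive rate per group against the true outcomes: the
    # disparity encoded in the proxy feature and biased labels reappears here.
    for g in (0, 1):
        true_negatives = (group == g) & (true_outcome == 0)
        fpr = predictions[true_negatives].mean()
        print(f"group {g}: false positive rate = {fpr:.3f}")

In this sketch the classifier never receives the group variable, yet its false positive rate is higher for the group whose data carry the historical bias, which is the sense in which a predictive algorithm can automate bias present in its training data.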