Algorithms can reduce discrimination, but only with proper data
If self-learning algorithms discriminate, it is not because the algorithm contains an error, but because the data used to train it are "biased."
Only when it is known which data subjects belong to vulnerable groups can bias in the data be made transparent and algorithms be trained properly. The taboo against collecting such data should therefore be broken, as this is the only way to eliminate future discrimination.
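To make the argument concrete, here is a minimal sketch (hypothetical data and group labels, not taken from the article) of why group membership must be recorded before bias can be made transparent: without the group labels, per-group decision rates, and hence the gap between them, simply cannot be computed.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Per-group rate of positive decisions (1 = selected, 0 = rejected)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest selection rate; a common
    rule of thumb flags ratios below 0.8 as potentially discriminatory."""
    return min(rates.values()) / max(rates.values())

# Hypothetical model decisions and the (normally taboo) group labels.
decisions = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0]
groups = ["A"] * 5 + ["B"] * 5

rates = selection_rates(decisions, groups)
print(rates)                    # {'A': 0.8, 'B': 0.2}
print(disparate_impact(rates))  # 0.25, far below the 0.8 guideline
```

In this toy example the audit immediately exposes the disparity between the two groups; delete the `groups` column, as current data-collection taboos in effect require, and the same disparity becomes invisible.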