AI is sending people to jail—and getting it wrong
Modern-day risk assessment tools are often driven by algorithms trained on historical crime data. Because that data reflects past policing decisions rather than a neutral record of offending, training on it risks teaching the machines to copy the mistakes of the past.
Populations that have historically been disproportionately targeted by law enforcement—especially low-income and minority communities—are at risk of being slapped with high recidivism scores. As a result, the algorithm could amplify and perpetuate embedded biases and generate even more bias-tainted data to feed a vicious cycle.
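To make that feedback loop concrete, here is a minimal simulation sketch. Every number, the group labels, and the patrol-allocation rule are hypothetical assumptions for illustration, not taken from any deployed tool: two groups offend at the same true rate, but the group that starts out more heavily policed accumulates more recorded arrests, earns a higher naive risk score, attracts still more policing, and the gap widens each period.

```python
import random

random.seed(0)

# Illustrative assumptions only (not from any real tool or dataset):
# both groups have the SAME true offense rate; group B simply starts
# out policed more heavily, so more of its offenses end up recorded
# as arrests in the historical data.
TRUE_OFFENSE_RATE = 0.10
POP_PER_GROUP = 10_000
policing = {"A": 0.30, "B": 0.60}  # P(an offense is recorded as an arrest)

def recorded_arrests(p_record):
    """Simulate one period: offenses occur, and some are recorded."""
    offenses = sum(random.random() < TRUE_OFFENSE_RATE
                   for _ in range(POP_PER_GROUP))
    return sum(random.random() < p_record for _ in range(offenses))

for period in range(6):
    arrests = {g: recorded_arrests(p) for g, p in policing.items()}
    # Naive "risk score": the arrest rate observed in historical data.
    scores = {g: arrests[g] / POP_PER_GROUP for g in arrests}
    print("period", period,
          "scores:", {g: round(s, 3) for g, s in scores.items()},
          "policing:", {g: round(p, 2) for g, p in policing.items()})
    # Feedback loop: shift patrols toward the higher-scoring group,
    # which inflates that group's future arrest counts, which raises
    # its future score in turn.
    hi = max(scores, key=scores.get)
    lo = min(scores, key=scores.get)
    policing[hi] = min(0.95, policing[hi] + 0.05)
    policing[lo] = max(0.05, policing[lo] - 0.05)
```

Running this, the score gap between the two groups grows every period even though their true offense rates never differ, which is the "bias-tainted data" cycle described above.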
Full article: AI is sending people to jail—and getting it wrong – MIT Technology Review