Privacy watchdog asks biz to drop AI that analyzes emotions
Companies should think twice before deploying AI-powered emotional analysis systems prone to systemic biases and other snafus, the UK’s Information Commissioner’s Office (ICO) warned this week.
Organizations face investigation if they press on and use this sub-par technology that puts people at risk, the watchdog added.
Machine-learning algorithms that purport to predict a person’s moods and reactions use computer vision to track gazes and facial movements, plus audio processing to gauge inflection and overall sentiment. As one might imagine, the results are not necessarily accurate or fair, and there may be privacy problems in handling the data used for training and inference.
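To give a rough sense of the kind of pipeline being described, here is a minimal, hypothetical sketch in Python: OpenCV’s stock Haar cascade locates faces in a video frame, and a placeholder `model.predict()` call stands in for the mood classifier, the step the watchdog considers sub-par. The emotion model and its labels are assumptions for illustration, not any vendor’s actual API.

```python
# Illustrative sketch only: face detection feeding a hypothetical emotion classifier.
# OpenCV's bundled Haar cascade is real; the `model` object and its labels are
# placeholders, not a real product's API.
import cv2


def detect_faces(frame):
    """Locate faces in a single video frame using OpenCV's Haar cascade."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)


def label_emotions(frame, model):
    """Crop each detected face and ask a (hypothetical) model for a mood label."""
    labels = []
    for (x, y, w, h) in detect_faces(frame):
        face = frame[y:y + h, x:x + w]
        labels.append(model.predict(face))  # e.g. "happy", "angry", "neutral"
    return labels
```

Even in this toy form, the weak link is obvious: whatever `model.predict()` returns is only as sound as the assumption that facial appearance reliably encodes inner emotional state, which is exactly what the ICO is questioning.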
Source: Privacy watchdog asks biz to drop AI that analyzes emotions • The Register