Artificial Intelligence

AI learns from humans – including their biases

Machine learning is advancing rapidly. One US study, however, gives food for thought: it reveals that artificial systems also adopt the biases of their human authors.

26 Apr. 2017

For their study, scientists at the Ivy League university Princeton performed a psychology test intended to reveal biases. The subject, however, was not a person but an algorithm, as the science magazine scinexx.de reports, among others. The system learned on the basis of word associations drawn from texts found online. "Machine learning is a means to derive artificial intelligence by discovering patterns in existing data," the researchers write.

On its own, AI is sometimes no more intelligent than the humans who wrote the texts it learns from. The study shows that the program also adopted the implicit or explicit biases of the respective authors. Flowers and musical instruments, for example, were linked more to positive associations, insects and weapons more to negative ones. The algorithm judged first names popular among African Americans in the US as more unpleasant than pleasant, and first names associated more with white people as more pleasant than unpleasant. Female words, such as "Ms." and "girl", were associated more with the arts, while male words were associated more with mathematics. The scientists from Princeton concluded that the selection of the datasets from which an AI learns through machine learning is crucial.
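The kind of association measurement described above can be illustrated in miniature. Word embeddings represent each word as a numeric vector, and a word's leaning can be measured as the difference between its cosine similarities to two attribute words (e.g. "pleasant" vs. "unpleasant"). The following Python sketch uses entirely made-up toy vectors, not the study's actual data or method, purely to show the mechanics:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-dimensional "embeddings", invented for illustration only.
# Real systems learn vectors with hundreds of dimensions from web text.
vectors = {
    "flower":     [0.9, 0.1, 0.0],
    "insect":     [0.1, 0.9, 0.0],
    "pleasant":   [1.0, 0.0, 0.1],
    "unpleasant": [0.0, 1.0, 0.1],
}

def association(word, attr_a, attr_b):
    # Positive result: the word sits closer to attr_a in the vector space;
    # negative result: closer to attr_b.
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

print(association("flower", "pleasant", "unpleasant"))  # positive: flower ~ pleasant
print(association("insect", "pleasant", "unpleasant"))  # negative: insect ~ unpleasant
```

Because the vectors are learned from human-written text, any systematic difference in how authors talk about, say, flowers versus insects ends up encoded in these distances, which is exactly how such biases surface in the model.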
