YouTube’s recommender AI faces flak

New Ideas in Marketing: Essential news for marketers, summarised by YouGov
July 07, 2021, 4:49 PM GMT+0

A recent study by Mozilla found YouTube’s AI continues to pull up piles of “bottom-feeding/low grade/divisive/disinforming content”.

YouTube’s video recommendation algorithm has long faced accusations of amplifying hate speech, political extremism and conspiracy theories as it looks to pull people into a vicious cycle of clicks. Mozilla’s study lends further weight to these accusations.

Though YouTube’s parent company Google responded by announcing a few policy tweaks and limiting some hateful content, there is still a lot of ground to cover. According to Mozilla, “… Google has been pretty successful at fuzzing criticism with superficial claims of reform.”

The Mozilla study found inappropriate content to be a greater problem in non-English-speaking countries. To fix YouTube’s algorithm, Mozilla suggested a combination of laws that mandate transparency in AI systems and protect independent researchers so they can interrogate these algorithms’ impacts. Such laws should also give platform users robust controls.

Read the original article

[17 minute read]