In context: YouTube’s 22.8 billion visits per month make it the world’s second most popular website, behind only Google.com, which is exactly what makes it such an enticing platform for those spreading misinformation. It’s something the Google-owned site has long tried to combat, and it is now making a renewed push to stop these narratives.
Neal Mohan, YouTube’s Chief Product Officer, wrote an extensive post about tackling misinformation on the service. It focuses on three areas, the first of which is stopping these videos before they go viral. YouTube classifies conspiracy theories, such as the claim that 5G caused the spread of the coronavirus, as content that violates its guidelines, but some new narratives emerge too quickly for the company’s systems to catch. As such, YouTube will “leverage an even more targeted mix of classifiers, keywords in additional languages, and information from regional analysts to identify narratives our main classifier doesn’t catch.”
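YouTube hasn’t published how these systems actually work, but the general idea of layering a main classifier with narrower signals can be sketched roughly as below. Everything here, the function names, keyword lists, and thresholds, is hypothetical and purely illustrative.

```python
# Hypothetical illustration only: YouTube has not disclosed its implementation.
# The idea: a general classifier is supplemented with narrower signals
# (multilingual keyword lists, analyst-flagged narratives) so that brand-new
# conspiracy narratives can be flagged before the main model learns them.

from dataclasses import dataclass


@dataclass
class Video:
    title: str
    description: str
    language: str


# Keyword lists maintained per language (stand-ins; real lists would be curated).
KEYWORDS = {
    "en": {"5g coronavirus", "miracle cure"},
    "de": {"5g corona"},
}

# Narratives reported by regional analysts before any model has seen them.
ANALYST_NARRATIVES = {"vaccine microchip"}


def main_classifier_score(video: Video) -> float:
    """Placeholder for a trained model; returns a misinformation probability."""
    return 0.1  # a brand-new narrative would typically score low here


def flag_for_review(video: Video, threshold: float = 0.8) -> bool:
    text = f"{video.title} {video.description}".lower()

    # 1) The main classifier catches known patterns.
    if main_classifier_score(video) >= threshold:
        return True
    # 2) Language-specific keyword lists catch emerging phrasing.
    if any(kw in text for kw in KEYWORDS.get(video.language, set())):
        return True
    # 3) Analyst-supplied narratives cover what neither of the above knows yet.
    return any(narrative in text for narrative in ANALYST_NARRATIVES)


if __name__ == "__main__":
    clip = Video("Die Wahrheit über 5G Corona", "Neues Video", "de")
    print(flag_for_review(clip))  # True: caught by the German keyword list
```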
YouTube’s four Rs of Responsibility
The second area of concern is the sharing of misinformation across platforms. YouTube says it has reduced how often it recommends “borderline” videos that don’t quite warrant removal, but these clips are often promoted on other sites via links and embeds. The company has considered removing the share button or disabling links for such videos, but worries that doing so would go too far and restrict viewers’ freedoms. It is also considering showing an interstitial warning that a clip may contain misinformation.
Finally, YouTube is looking to better tackle misinformation in languages other than English. It notes that what counts as borderline content varies from country to country. One option is partnering with non-governmental organizations to better understand regional and local misinformation.
As with all internet platforms, YouTube must walk a fine line between banning anything it considers harmful and overextending its reach to the point where it’s accused of censorship. “We need to be careful to balance limiting the spread of potentially harmful misinformation, while allowing space for discussion of and education about sensitive and controversial topics,” Mohan said.