Google LLC-owned YouTube said today that it’s in the process of adopting new measures to tackle how misinformation spreads on the platform.
Chief Product Officer Neal Mohan said in a blog post that three main issues need attention: catching misinformation before it goes viral, addressing the problem of cross-platform sharing and curbing the spread of misinformation not just in English but in languages around the globe.
Per that last point, in January 80 fact-checking groups based around the world said YouTube wasn’t doing enough to prevent the spread of misinformation. In a signed letter to the company, they concluded that whatever YouTube was already doing was “insufficient.”
“YouTube is allowing its platform to be weaponized by unscrupulous actors to manipulate and exploit others, and to organize and fundraise themselves,” said the letter. “We urge you to take effective action against disinformation and misinformation.”
It seems this might have motivated YouTube into action. The company said that its combination of machine learning and human review already takes down content quickly, but admitted that this isn’t good enough and that its approach needs to “evolve,” just as disinformation has evolved from 9/11 trutherism to new vaccine conspiracy theories.
“To address this, we’re continuously training our system on new data,” said YouTube. “We’re looking to leverage an even more targeted mix of classifiers, keywords in additional languages, and information from regional analysts to identify narratives our main classifier doesn’t catch.”
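That quoted approach, a main classifier supplemented by per-language keyword lists and regional analyst input, can be illustrated in miniature. The sketch below is not YouTube’s actual system; every name, keyword list and threshold is hypothetical, and the “main classifier” score is simply passed in rather than computed by a real model.

```python
# Hypothetical sketch of layering language-specific keyword lists and
# analyst flags on top of a main classifier's score. Not YouTube's code.
from dataclasses import dataclass, field

# Per-language keyword lists meant to catch narratives the main
# classifier was never trained on (all entries are made up).
KEYWORDS = {
    "en": {"miracle cure", "stolen election"},
    "pt": {"cura milagrosa"},
}

@dataclass
class Signals:
    model_score: float                     # 0.0-1.0 from the main classifier
    matched_keywords: list = field(default_factory=list)
    analyst_flagged: bool = False          # input from a regional analyst

def gather_signals(title: str, lang: str, model_score: float,
                   analyst_flagged: bool = False) -> Signals:
    """Combine the classifier's score with any per-language keyword hits."""
    lowered = title.lower()
    hits = [kw for kw in KEYWORDS.get(lang, set()) if kw in lowered]
    return Signals(model_score, hits, analyst_flagged)

def needs_review(s: Signals, threshold: float = 0.8) -> bool:
    """Route a video to human review if any of the three signals fires."""
    return (s.model_score >= threshold
            or bool(s.matched_keywords)
            or s.analyst_flagged)

# A Portuguese keyword catches a video the model score alone would miss.
print(needs_review(gather_signals("Cura milagrosa revelada!", "pt", 0.35)))  # True
```

The point of the layering, at least as sketched here, is that each signal covers the others’ blind spots: keyword lists and regional analysts can catch new narratives in languages where the main classifier has little training data.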
Another problem is the sharing of what YouTube calls “borderline videos”: content that doesn’t quite violate the rules enough to be taken down, but that YouTube feels shouldn’t go viral. YouTube said there’s a simple solution, which is disabling the share button or breaking the link on such videos. “But we grapple with whether preventing shares may go too far in restricting a viewer’s freedoms,” said the company.
With that in mind, YouTube said a better approach could be attaching what it called a “speed bump” to such videos: a warning telling viewers that the shared video they’re about to watch could contain questionable information. That way viewers get to make up their own minds, and perhaps the warning will encourage them to do more research.
As for the third problem, the global issue, this one is tricky. “Cultures have different attitudes towards what makes a source trustworthy,” explained the company. “In some countries, public broadcasters like the BBC in the U.K. are widely seen as delivering authoritative news. Meanwhile, in others, state broadcasters can veer closer to propaganda.”
How do you label fake news when dictatorial governments are some of its most active spreaders? YouTube said the answer likely lies in growing its teams in various countries so that cultural nuances are better understood.
Photo: Alexander Shatov/Unsplash