

This week in AI, Sony, the Japanese electronics giant that’s still selling luxury MP3 players, announced Gran Turismo (GT) Sophy, an AI system that can outrace human drivers in head-to-head competition in the video game series Gran Turismo. Alphabet-owned AI research lab DeepMind revealed that YouTube is now using AI that mastered chess to compress streaming videos. And in other news, U.S. Sens. Amy Klobuchar (D-MN) and Cynthia Lummis (R-WY) introduced a bill that would mark Congress’ first major step toward addressing, as The Verge’s Makena Kelly puts it, “algorithmic amplification of harm.”

GT Sophy arrives as the AI field moves away from video games as a benchmark of progress, at least in some circles. But one expert told Wired that it marks a “landmark achievement for AI” because the techniques used to develop it could be applied to real-world self-driving cars. While autonomous cars currently rely mostly on manually written software for control, GT Sophy, whose architecture is detailed in a paper published in the journal Nature this week, leverages AI to great effect.

Sony researchers worked with Polyphony Digital, the studio behind Gran Turismo, to train the system over many hours of simulated racing. Through a technique called reinforcement learning, which “rewards” or “punishes” the system for making advantageous or disadvantageous decisions, respectively, GT Sophy learned to beat professional esports drivers without incurring penalties for unfair play.
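For readers unfamiliar with the technique, here is a minimal, hypothetical sketch of how that kind of reward shaping works in reinforcement learning. The reward terms and weights below are illustrative assumptions, not GT Sophy’s actual reward design:

```python
# Illustrative only: a toy reward function in the spirit of reinforcement
# learning. The terms and weights are hypothetical, not GT Sophy's design.

def step_reward(progress_m: float, off_track: bool, collision: bool) -> float:
    """Reward forward progress; 'punish' unsafe or unsporting driving."""
    reward = progress_m       # meters gained along the track this step
    if off_track:
        reward -= 5.0         # penalty for cutting the course
    if collision:
        reward -= 10.0        # penalty for contact that would draw a sanction
    return reward
```

An agent trained to maximize the accumulated sum of such rewards learns both to drive fast and, because of the penalty terms, to race cleanly.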

GT Sophy could pave the way for games featuring AI players that are far more lifelike than many today. But beyond this (and potentially bolstering real-world self-driving cars), GT Sophy could help drivers improve their skills. “Sophy takes some racing lines that a human driver would never think of,” Kazunori Yamauchi, the creator of Gran Turismo and a real-life race car driver, told Wired. “I think a lot of the textbooks regarding driving skills will be rewritten.”

Video-compressing AI

In December 2020, DeepMind published a paper in the journal Nature detailing MuZero, an algorithm that picks up the rules of games like chess, shogi, and Go as it plays. At the time, the lab’s researchers said that they were planning to explore potential commercial applications, and now, it seems that they’ve found one: compressing YouTube videos.

DeepMind claims that MuZero was able to reduce the average amount of data that YouTube needs to stream to users by 4% without a noticeable loss in video quality. The system is now in active use on most, but not all, YouTube videos, DeepMind says, mostly working to improve VP9, Google’s open source video compression codec.

MuZero treats video compression like a “game,” competing against its own previous attempts to compress a video and trying to improve on certain quality and bitrate metrics. Like GT Sophy, it’s a reinforcement learning system, but MuZero isn’t given any past examples of effective strategies, meaning it must learn through its own experience.
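As a rough illustration, compression can be framed as a sequential decision problem along these lines. The state, actions, scoring, and every name below are assumptions made for the sketch, not MuZero’s actual formulation, though DeepMind has described its system as choosing a quantization parameter (QP) for each frame it encodes:

```python
# A hypothetical framing of video compression as a 'game': per-frame
# quantization decisions, scored on quality versus bitrate. Names and
# weights are illustrative, not MuZero's actual formulation.

def score_episode(total_bits: int, mean_quality: float, bit_budget: int) -> float:
    """Score one full pass over a video: reward quality, penalize
    exceeding the bitrate budget."""
    overshoot = max(0, total_bits - bit_budget)
    return mean_quality - 0.001 * overshoot

def compress_episode(frames, policy, encoder):
    """Play one 'episode': the policy picks a QP per frame; the encoder
    reports how many bits the frame cost and how good it looked."""
    total_bits, qualities = 0, []
    for frame in frames:
        qp = policy.choose_qp(frame)               # the agent's action
        bits, quality = encoder.encode(frame, qp)  # environment feedback
        total_bits += bits
        qualities.append(quality)
    return total_bits, sum(qualities) / len(qualities)
```

“Competing against its own previous attempts” then amounts to comparing each new policy’s episode scores against the scores of earlier ones.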

MuZero has led to surprising insights. For example, it learned to ignore a standard video compression rule holding that the bitrate should be maximized for the first frame in a scene and again 10 frames later: for many video sequences, MuZero found, as long as the bitrate was maximized for one of those two frames, the other didn’t need as much bandwidth.

MuZero isn’t perfect, though. It struggles with slideshow-style videos, allocating more bandwidth to the transition frames while skimping on the slides themselves. YouTube engineers discovered this and fixed it through hard-coded rules for that kind of video, according to DeepMind.
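A guardrail of that sort might look something like the following sketch. The motion threshold and constant are invented for illustration and are not YouTube’s actual rules:

```python
STATIC_SLIDE_QP = 20  # illustrative: a low QP spends more bits on the frame

def choose_qp(frame_motion: float, learned_qp: int) -> int:
    """Hypothetical override: if almost nothing changed since the last
    frame (a held slide rather than a transition), ignore the learned
    policy and spend the bits on the slide itself."""
    if frame_motion < 0.01:
        return STATIC_SLIDE_QP
    return learned_qp
```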

DeepMind claims that MuZero could help people in countries with limited broadband watch videos they’d otherwise struggle to view. “If you can compress videos and make them smaller, you can have massive savings on all internet traffic,” David Silver, who leads the reinforcement learning group at DeepMind, told VentureBeat in a previous interview. “This is something which we can apply our learning algorithms to and have a lot of the characteristics of the real world because you never know what you’re going to see next in the video.”

Algorithm regulation

The bill Sens. Klobuchar and Lummis have proposed aims to regulate a very different kind of algorithm than GT Sophy: the kind that recommends content on social media. Called the Social Media NUDGE Act, the bill would direct agencies including the National Science Foundation to identify “content neutral” methods to slow the spread of harmful content and misinformation, and it would task the Federal Trade Commission with mandating that social media platforms put those methods into practice.

Notably, the NUDGE Act wouldn’t modify Section 230 of the Communications Decency Act, which protects web hosts and platforms from legal liability for content uploaded to their services. Last March, Reps. Anna Eshoo (D-CA) and Tom Malinowski (D-NJ) introduced a similar algorithmic regulation bill, the Protecting Americans from Dangerous Algorithms Act, but one that would’ve put hosts and platforms on the hook for any “amplified content” found to violate civil rights.

“For too long, tech companies have said ‘Trust us, we’ve got this,’” Klobuchar said in a statement. “But we know that social media platforms have repeatedly put profits over people, with algorithms pushing dangerous content that hooks users and spreads misinformation.”

The NUDGE Act would go further than the European Union’s proposed AI law, the AI Act, which bans some uses of AI and heavily regulates “high-risk” uses such as systems that determine eligibility for public benefits, score credit, or dispatch emergency services. While the AI Act would require providers and users of AI to comply with rules on data governance, documentation, transparency, accuracy, and security, it doesn’t treat the algorithms used in social media as high risk (unless a regulator deems them so on a case-by-case basis).

Given that Facebook’s own reports found that its algorithms have enabled disinformation and misinformation to flourish, bills like the NUDGE Act are overdue, says Public Knowledge’s Greg Guice. “Public Knowledge supports this legislation because it encourages informed decision-making to address a known problem: the promotion of misinformation,” he said in a statement. “Most importantly, the bill does all of this without tying compliance to Section 230 immunity.”

Time will tell, however, whether the NUDGE Act can attract the support of fellow lawmakers before the coming midterm elections, which will likely shift the balance of power in Congress. If it doesn’t, given the political disagreements on the issue, it could be some time before social media regulations make their way into law.

For AI coverage, send news tips to Kyle Wiggers – and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.

Thank you for reading,

Kyle Wiggers

AI Senior Staff Writer


