Artificial intelligence – or AI – has been part of video games almost since their inception. Now AI-controlled agents are taking a leap forward, and the video game Gran Turismo is showing what that looks like.
Speed around a French village in Gran Turismo, and you might spot a Corvette behind you trying to catch your slipstream. Using the draft of an opponent’s racecar to speed up and overtake them is favored by skilled players of PlayStation’s realistic racing game.
However, this Corvette driver is not being controlled by a human – it’s GT Sophy, a powerful artificial intelligence agent built by PlayStation maker Sony.
Gran Turismo players have been competing against computer-generated racecars since the franchise launched in the 1990s. But the new AI driver unleashed last week in Gran Turismo 7 is more intelligent and faster because it was trained using the latest AI methods.
“Gran Turismo had a built-in AI existing from the beginning of the game, but it has a very narrow band of performance, and it isn’t perfect,” said Michael Spranger, chief operating officer of Sony AI. “It’s very predictable. But, once you reach a certain level, it doesn’t entice you anymore.”
But now, he said, “this AI is going to put up a fight.”
Visit an artificial intelligence laboratory at a university or at companies like Sony, Google, Meta, Microsoft and ChatGPT maker OpenAI, and it’s not unusual to find AI agents like Sophy racing cars, slinging angry birds at pigs, fighting epic interstellar battles, or helping human gamers build new Minecraft worlds – all part of the job description for computer systems trying to get smarter through games.
But sometimes these agents are also learning skills meant for the real world. In a January paper, a University of Cambridge researcher who built an AI agent to control Pokémon characters argued it could “inspire all sorts of applications that require team management under conditions of extreme uncertainty, including managing a team of doctors, robots or employees in an ever-changing environment, like a pandemic-stricken region or a war zone.”
And while that might sound like a kid making a case for playing three more hours of Pokémon Violet, the study of games has been used to advance AI research – and train computers to solve complex problems – since the mid-20th century.
Initially, games like checkers and chess were used to test AI strategies for winning. Now a new branch of research focuses on performing open-ended tasks in complex worlds and on interacting with humans, not just beating them.
“Reality is like a super-complicated game,” said Nicholas Sarantinos, who authored the Pokémon paper and recently turned down a doctoral offer at Oxford University to start an AI company to help corporate workplaces set up more collaborative teams.
In the web-based Pokémon Showdown battle simulator, Sarantinos developed an algorithm to analyze a team of six Pokémon – predicting how they would perform based on all the possible battle scenarios ahead of them and their comparative strengths and weaknesses.
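The paper’s actual method is more involved, but the core idea – scoring a roster against the battle scenarios it might face – can be sketched in a few lines. Everything below is hypothetical (the type chart, the scoring rule, the three-member teams): for each possible opponent, take the team’s best answer, then average.

```python
# Illustrative sketch only -- not Sarantinos's actual algorithm.
# A hypothetical effectiveness table: multiplier for attacker vs. defender.
EFFECTIVENESS = {
    ("fire", "grass"): 2.0, ("grass", "fire"): 0.5,
    ("water", "fire"): 2.0, ("fire", "water"): 0.5,
    ("grass", "water"): 2.0, ("water", "grass"): 0.5,
}

def matchup(attacker_type, defender_type):
    """Effectiveness multiplier; 1.0 when the types are neutral."""
    return EFFECTIVENESS.get((attacker_type, defender_type), 1.0)

def team_score(team, opponent_pool):
    """For each opponent the team could face, take the team's best
    available matchup, then average -- a crude stand-in for weighing
    comparative strengths and weaknesses across scenarios."""
    best_answers = [max(matchup(m, o) for m in team) for o in opponent_pool]
    return sum(best_answers) / len(best_answers)

pool = ["fire", "water", "grass"]
balanced = ["fire", "water", "grass"]   # has an answer to everything
monotype = ["fire", "fire", "fire"]     # weak against water

print(team_score(balanced, pool))
print(team_score(monotype, pool))
```

Under this toy metric the balanced team scores higher than the monotype one, because it always has a favorable answer available – the kind of signal a team-analysis algorithm can optimize for.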
Microsoft, which owns the popular Minecraft game franchise and the Xbox game system, has tasked AI agents with various activities – from steering clear of lava to chopping trees and making furnaces. Researchers hope some of what the agents learn could eventually translate into real-world technology, such as getting a home robot to take on certain chores without being programmed for each one.
While it “goes without saying” that real humans behave quite differently from fictional video game creatures, “the core ideas can still be used,” Sarantinos said. “If you use psychology tests, you can take this information to conclude how well they can work together.”
Amy Hoover, an assistant professor of informatics at the New Jersey Institute of Technology who’s built algorithms for the digital card game Hearthstone, said there is a reason for studying games, though it is not always easy to explain.
“People don’t always understand that the point is about the optimization method rather than the game,” she said.
Games also offer a valuable testbed for AI – including for some real-world applications in robotics or health care that are safer to try first in a virtual world, said Vanessa Volz, a researcher and co-founder of the Danish startup Modl.ai, which builds AI systems for game development.
But, she adds, “it can get overhyped.”
“It’s probably not going to be one big breakthrough and that everything is going to be shifted to the real world,” Volz said.
Japanese electronics giant Sony launched its own AI research division in 2020 with entertainment in mind, but it’s nonetheless attracted broader scholarly attention. Last year, its research paper introducing Sophy made it to the cover of the prestigious science journal Nature, which said it could potentially affect other applications such as drones and self-driving vehicles.
The technology behind Sophy is based on an algorithmic method known as reinforcement learning, which trains the system by rewarding it when it gets something right as it runs virtual races thousands of times.
“The reward will tell you, ‘You’re making progress. This is good,’ or, ‘You’re off the track. Well, that’s not good,’” Spranger said.
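That reward signal is the heart of reinforcement learning. As a rough illustration – a toy sketch, not Sony’s actual system – here is tabular Q-learning on a one-dimensional “track,” where driving forward earns a reward and going off the track is penalized, echoing the feedback Spranger describes:

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# Positions 0..5 on a tiny "track"; the finish line is position 5.
# Action 0 = "steer off track" (penalized), action 1 = "drive forward".
TRACK_LEN = 5
ACTIONS = [0, 1]
q = {(s, a): 0.0 for s in range(TRACK_LEN + 1) for a in ACTIONS}

def step(state, action):
    """Return (next_state, reward) for the toy environment."""
    if action == 1:                        # making progress: "this is good"
        return min(state + 1, TRACK_LEN), 1.0
    return state, -1.0                     # off the track: "not good"

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    for _ in range(episodes):
        state = 0
        while state < TRACK_LEN:
            # Epsilon-greedy: usually exploit the best-known action,
            # occasionally explore a random one.
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # Q-learning update: nudge the estimate toward the reward
            # plus the discounted value of the best next action.
            q[(state, action)] += alpha * (
                reward + gamma * best_next - q[(state, action)]
            )
            state = nxt

train()
# The learned policy: the preferred action at each pre-finish position.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(TRACK_LEN)]
print(policy)
```

After thousands of simulated steps, the positive and negative rewards shape the value table until “drive forward” dominates at every position. GT Sophy applies the same principle at vastly larger scale, with a full racing simulator standing in for this toy environment.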
PlayStation players will get to race against Sophy only until March 31, and on a limited number of circuits, so the team can gather feedback and return to testing. Peter Wurman, director of Sony AI America and project lead on GT Sophy, said it takes about two weeks for the AI agents to train on 20 PlayStations.
“To get it spread throughout the whole game, it takes some more breakthroughs and more time before we’re ready,” he said.
And to get it onto actual streets or a Formula One track? That could take a lot longer.
Self-driving car companies use similar machine-learning techniques, but “they don’t hand over complete control of the car the way we can,” Wurman said. “In a simulated world, there’s nobody’s life at risk. You know exactly the kinds of things you will see in the environment. There are no people crossing the road or anything like that.”