When Pac-Man hit arcades on May 22nd, 1980, it held the record for the longest development time in the industry, having taken a whopping 17 months to design, code and complete. Now, 40 years later to the day, NVIDIA needed just four days to train its new GameGAN AI to recreate the game wholesale, based only on watching another AI play through it.
Dubbed GameGAN, it’s a generative adversarial network (hence, GAN) similar to those used to generate (and detect) photorealistic images of people who don’t exist. In general, GANs work by pairing two neural networks: the generator and the discriminator. The generator is trained on a large sample dataset and then instructed to produce new images based on what it saw. The discriminator then compares each generated image to the sample dataset to judge how closely the two resemble one another. By cycling between these networks, the AI gradually creates more and more realistic images.
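That generator-versus-discriminator cycle can be shown in miniature. The sketch below is a toy one-dimensional GAN, assuming nothing about GameGAN's actual architecture: the "real" data are numbers near 4.0, the generator is a linear map, the discriminator a logistic scorer, and both are updated with hand-derived gradients.

```python
import math
import random

random.seed(0)

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

# Toy 1-D GAN (illustrative only, not GameGAN's design):
# generator g(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr = 0.05                # learning rate

for step in range(2000):
    real = random.gauss(4.0, 0.5)   # a sample of "real" data
    z = random.gauss(0.0, 1.0)      # noise fed to the generator
    fake = a * z + b

    # Discriminator ascent: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * ((1 - d_real) * real - d_fake * fake)
    c += lr * ((1 - d_real) - d_fake)

    # Generator ascent on log D(fake): make fakes look real to D.
    d_fake = sigmoid(w * fake + c)
    a += lr * (1 - d_fake) * w * z
    b += lr * (1 - d_fake) * w

# After training, the generator's offset b has drifted toward the
# real data's mean of 4.0 -- the adversarial cycle at work.
print(round(b, 2))
```

Nothing here scales to generating game frames, but the alternating updates are the same pattern GameGAN runs at vastly larger size.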
In GameGAN’s case, the generative network was trained using 50,000 play sessions of the game and then told to recreate it as a whole, from the static walls and pellets to the ghosts, Pac-Man himself and the rules governing their interactions. The entire process ran on a quartet of GP100 GPUs. GameGAN was not, however, provided with any of the underlying code or access to the game’s engine. Much like learning the rules by peering over your older brother’s shoulder as he played, GameGAN figured out Pac-Man solely by watching the onscreen action and following the controller inputs as a separate AI played the game.
“There have been many AIs created in recent years that can play games; they’re agents within these games,” Rev Lebaredian, NVIDIA’s VP of simulation technology, told Engadget. “But this is the first GAN that’s been created that can actually reproduce the game itself as a black box.”
As an NVIDIA blog posted on Friday explains, “As an artificial agent plays the GAN-generated game, GameGAN responds to the agent’s actions, generating new frames of the game environment in real time. GameGAN can even generate game layouts it’s never seen before, if trained on screenplays from games with multiple levels or versions.”
This creation process is similar to procedural generation, a technique that has been around since the late ‘70s, but far more efficient. “So if you can think about the work that goes into creating a game like Pac-Man,” Lebaredian said, “there’s a programmer that has to sit there and really think about all of the rules and how they’re going to exactly describe the creation of this game, the creation of the maze and the interaction of all of the agents within that game. It’s painstaking work.”
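For contrast, here is classic procedural generation: a randomized depth-first search that carves a maze from explicit, hand-written rules. Every rule below had to be specified by a programmer, which is exactly the painstaking work Lebaredian describes and what GameGAN learns by observation instead.

```python
import random

random.seed(2)

# Procedural maze generation via randomized depth-first search:
# every behavior is an explicit rule written by a programmer.
W, H = 5, 5
visited = [[False] * W for _ in range(H)]
passages = set()  # walls knocked out between adjacent cells

def carve(x, y):
    visited[y][x] = True
    dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    random.shuffle(dirs)          # rule: explore neighbors in random order
    for dx, dy in dirs:
        nx, ny = x + dx, y + dy
        if 0 <= nx < W and 0 <= ny < H and not visited[ny][nx]:
            passages.add(frozenset({(x, y), (nx, ny)}))  # rule: open a wall
            carve(nx, ny)

carve(0, 0)
# A perfect maze over N cells is a spanning tree with N - 1 passages.
print(len(passages))  # 24 for a 5x5 grid
```

Where this script encodes its rules by hand, GameGAN infers equivalent rules from tens of thousands of observed play sessions.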
“What this can help with is, we can have the GAN just learn what all of those rules are by observing,” he continued. “Ideally we would teach something like this GameGAN what the procedural rules are for the worlds you want to create.”
This could be as simple as, say, strapping a video camera to a car’s dashboard and going for a drive. GameGAN would be able to train on that video data and generate realistic, procedurally generated levels based on what the camera has seen.
This technique could also improve the development times of real-world autonomous machines. Since the robots employed in warehouses and on assembly lines can pose a threat to the safety of their human coworkers, these machines are typically first trained virtually so that if they do make a mistake, no actual harm is caused. The problem is that laying out these digital training scenarios is a laborious and time-consuming task. We could one day just train a deep learning model capable of predicting the consequences of its actions and use that instead.
“We could eventually have an AI that can learn to mimic the rules of driving, the laws of physics, just by watching videos and seeing agents take actions in an environment,” Sanja Fidler, director of NVIDIA’s Toronto research lab, said in a press release. “GameGAN is the first step toward that.”
NVIDIA’s GameGAN Pac-Man is a fully functional game that both humans and CPUs will be able to play when the company releases it online later this summer.