Google’s artificial intelligence (AI) branch, DeepMind, has developed a new AI that remembers how it solved past problems and uses that knowledge to solve new ones, according to the Guardian.
DeepMind set the AI a range of tasks, and it performed almost as well as a human. However, the AI is not a general intelligence; its use of past knowledge is limited. It works out which connections in its neural network have been most important for the tasks it has learned so far, then makes those connections harder to change as it learns the next skill.
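The idea of making important connections harder to change can be expressed as a penalty added to the training objective: each weight is pulled back toward its old value, with a pull proportional to how important that weight was for earlier tasks. The sketch below is a minimal illustration of that idea, not DeepMind's actual implementation; the names (`anchor`, `importance`, `strength`) and the plain-Python gradient step are assumptions for clarity.

```python
def consolidation_penalty(weights, anchor, importance, strength=1.0):
    """Quadratic penalty that grows when weights important for past
    tasks drift from their old (anchor) values."""
    return strength * sum(
        imp * (w - a) ** 2
        for w, a, imp in zip(weights, anchor, importance)
    )

def gradient_step(weights, task_grad, anchor, importance,
                  lr=0.01, strength=1.0):
    """One gradient-descent step on the new task's loss plus the
    consolidation penalty. task_grad is the gradient of the new
    task's loss with respect to each weight."""
    return [
        # 2 * strength * imp * (w - a) is the gradient of the penalty
        w - lr * (g + 2 * strength * imp * (w - a))
        for w, g, a, imp in zip(weights, task_grad, anchor, importance)
    ]
```

A weight with high `importance` settles close to its anchor even under a steady pull from the new task's gradient, while a weight with zero importance is free to move wherever the new task needs it, which is how old skills can survive new training.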
The researchers made the AI play Atari games, including Breakout, Space Invaders and Defender, in random order. They found that after several days of practice on each game, the AI was as good as a human player. Without the process of using old memory to solve new problems, however, it barely coped with any of the games.
“If we’re going to have computer programs that are more intelligent and more useful, then they will have to have this ability to learn sequentially,” DeepMind research scientist James Kirkpatrick said.
Kirkpatrick said that without the ability to learn one skill on top of another, AIs will be at a disadvantage to humans and animals.