Alphabet’s DeepMind AI engineers have developed the AlphaCode artificial intelligence system, which can generate code and solve competitive programming problems. The neural network was taught to understand a problem statement and search for a solution.
Google engineers took a new approach to AI training. They assumed that a problem description expresses what the algorithm should do, and that its solution expresses the same thing in a different language. First, the neural network was trained to understand task descriptions, and then to generate program code from its internal representation, says Habr.
To train AlphaCode, Alphabet used a GitHub archive with more than 700 GB of code, along with comments in natural language. DeepMind then organized an internal programming championship and trained the AI on its materials. The system was shown the full cycle: the problem statement, working and non-working code, and test cases to check it. Engineers noted that this approach is not new, but that this time more resources were allocated for training.
Initially, more than 40% of the solutions proposed by the neural network either demanded too many hardware resources or took too long to run. After analyzing the code, the engineers found that, across different problems, the system often generated similar code fragments that produced identical answers on the same input data.
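The deduplication idea described above can be sketched roughly as follows. This is a hypothetical illustration, not DeepMind's actual code: candidate solutions are assumed to be callables, and two candidates are treated as duplicates when they produce the same outputs on the same test inputs.

```python
# Hypothetical sketch: group candidate solutions by their behavior on a set
# of test inputs, keeping one representative per behavior group.
from collections import defaultdict

def dedupe_by_behavior(candidates, test_inputs):
    """Keep one candidate per distinct output signature."""
    groups = defaultdict(list)
    for solve in candidates:
        # The tuple of outputs acts as a behavioral fingerprint.
        signature = tuple(solve(x) for x in test_inputs)
        groups[signature].append(solve)
    return [members[0] for members in groups.values()]

# Toy example: two candidates behave identically, one differs.
a = lambda s: s.upper()
b = lambda s: s.upper()
c = lambda s: s[::-1]
unique = dedupe_by_behavior([a, b, c], ["ab", "cd"])
print(len(unique))  # 2
```

Collapsing behaviorally identical candidates this way shrinks the pool of solutions that must be checked, which matches the engineers' observation that many fragments gave the same answers for the same inputs.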
DeepMind eliminated the incorrect options, and AlphaCode was able to solve problems at the level of programmers with several months to a year of experience. As a result, the neural network placed within the top 54% of contestants who completed the championship tasks.
To improve the neural network's performance, engineers introduced an automated check of the 100,000 solutions offered by the system. This proportionally increased the share of correct answers, but the resource demands of the computing system grew as well. Training alone required 16 times the annual energy consumption of an average American family.
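An automated check over a large sample of candidate solutions can be sketched as a simple filter. This is an assumed simplification, not the system's real pipeline: each candidate is modeled as a callable, and the problem's example tests are (input, expected output) pairs.

```python
# Hypothetical sketch: run every sampled candidate against the problem's
# example tests and keep only those that pass all of them.
def filter_by_tests(candidates, example_tests):
    survivors = []
    for solve in candidates:
        try:
            if all(solve(inp) == expected for inp, expected in example_tests):
                survivors.append(solve)
        except Exception:
            pass  # candidates that crash are discarded
    return survivors

# Toy problem: read two integers from a string and print their sum.
tests = [("2 3", "5"), ("10 1", "11")]
good = lambda s: str(sum(map(int, s.split())))
bad = lambda s: "0"
print(len(filter_by_tests([good, bad, good], tests)))  # 2
```

Running such a check over 100,000 samples per problem makes clear why the share of correct answers rises with sample count while compute costs rise with it.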
As a result, the engineers concluded that the system does offer correct solutions, but its resource consumption grows as task conditions become more complex. In their view, AlphaCode can therefore serve as an assistant to programmers, but it is not capable of replacing them.
NIXSolutions reminds us that in February, DeepMind first unveiled AlphaCode, which it says “writes computer programs at a competitive level.”
In May, the company released a “general purpose” artificial intelligence system that can be trained to perform many different types of tasks. The researchers trained the system, called Gato, to perform 604 tasks, including adding captions to images, engaging in dialogue, stacking blocks with a robotic arm, and playing Atari games.
In November, it was reported that Google launched a new secret project, Pitchfork, in which the company intends to train artificial intelligence to write and correct code.