DeepMind's artificial intelligence has learned to generate photorealistic images
The British company DeepMind, which became part of Google in 2014, works continuously on improving artificial intelligence. In June 2018, its staff presented a neural network capable of generating three-dimensional images from two-dimensional ones. In October, the developers went further: they created BigGAN, a neural network that generates images of nature, animals, and objects that are difficult to distinguish from real photographs.
As in other projects that create artificial images, the technology is based on a generative adversarial network (GAN). Recall that it consists of two parts: a generator and a discriminator. The generator creates images, while the discriminator evaluates how closely they resemble samples of the ideal result.
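The generator-versus-discriminator idea can be illustrated with a deliberately tiny sketch. This is not BigGAN (which uses large convolutional networks and class conditioning); it is a minimal one-dimensional GAN with a linear generator and a logistic-regression discriminator, trained with hand-derived gradients, purely to show the adversarial loop the article describes. All names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_real(n):
    # "Real data": a 1-D Gaussian the generator should learn to imitate.
    return rng.normal(3.0, 0.5, size=n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: linear map from noise z to a sample, x = w*z + b.
w, b = 0.1, 0.0
# Discriminator: logistic regression, D(x) = sigmoid(a*x + c),
# i.e. its estimate of the probability that x is real.
a, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    z = rng.normal(size=batch)
    x_real = sample_real(batch)
    x_fake = w * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    a += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend log D(fake) (the "non-saturating" loss),
    # i.e. try to make the discriminator call fakes real.
    d_fake = sigmoid(a * x_fake + c)
    gx = a * (1 - d_fake)          # gradient of the loss w.r.t. x_fake
    w += lr * np.mean(gx * z)
    b += lr * np.mean(gx)

samples = w * rng.normal(size=1000) + b
print(float(samples.mean()))  # after training, the mean drifts toward 3.0
```

The same two-player structure, scaled up to deep convolutional networks and image data, is what BigGAN trains at much larger batch sizes and model widths.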
"In this work, we wanted to erase the boundary between images created by AI and photographs from the real world. We found that existing generation methods are sufficient for this," the developers write.
To teach BigGAN to create pictures of butterflies, dogs, and food, different sets of images were used. Training started with the ImageNet database and then moved to the far larger JFT-300M set: 300 million images divided into 18,000 categories.
Training BigGAN took two days and required 128 of Google's tensor processors (TPUs), chips designed specifically for machine learning.
Professors from Heriot-Watt University in Scotland also took part in developing the neural network. The techniques are described in detail in the paper "Large Scale GAN Training for High Fidelity Natural Image Synthesis."
In September, researchers at Carnegie Mellon University used generative adversarial networks to create a system that transfers one person's facial expressions onto another person's face.
How might humanity put such a neural network to use? Share your ideas in the comments or in our Telegram chat.