A group of artificial intelligence researchers at Facebook, together with American and Taiwanese scientists, has developed a new method for creating 3D photos, reports 3dnews. Similar algorithms for giving images a three-dimensional appearance already exist, but the results they produce are marred by artifacts such as blurring and other distortions. The new machine learning-based technology eliminates virtually all of these shortcomings, notes NIX Solutions.
The new neural network can add volume to two types of images. First, it supports RGB-D photos, which can be captured with the iPhone’s dual camera, the Kinect controller, and other devices capable of measuring scene depth. Second, it can work with ordinary 2D images, provided that a depth map is estimated for them beforehand.
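To give a sense of how an ordinary 2D photo can be prepared for such a pipeline, here is a minimal sketch that estimates a depth map with the publicly available MiDaS monocular depth estimator via torch.hub. The model choice and the file name "photo.jpg" are assumptions for illustration, not necessarily the exact components used by the authors.

```python
# Sketch: turn a plain 2D photo into an RGB-D pair by estimating depth.
# Assumes the MiDaS small model from torch.hub; file name is hypothetical.
import cv2
import torch

img = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
input_batch = transforms.small_transform(img)

with torch.no_grad():
    prediction = midas(input_batch)
    # Resize the predicted depth back to the original image resolution.
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze().cpu().numpy()

# `depth` is a per-pixel relative depth map; together with `img` it plays the
# same role as an RGB-D capture from a depth-sensing device.
```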
The capabilities of the neural network were demonstrated on randomly selected photos from the RealEstate10K dataset. To show the quality of processing ordinary 2D photographs, historical images from the 20th century were used.
Thanks to the use of inpainting, the developers were able to remove the artifacts that arise when three-dimensional images are created by other methods. During this process, the neural network detects missing pixels in regions uncovered by the shift in viewpoint and synthesizes plausible content for them, so the resulting volumetric photographs show no blurring or distortion.
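The sketch below only illustrates the general idea of inpainting: pixels exposed when the viewpoint shifts have no known color, so they are marked in a mask and filled in from the surrounding context. The paper's method is a learned, context-aware inpainting network; the classical OpenCV routine and the file names here are assumptions used purely for demonstration.

```python
# Sketch of hole filling in a warped novel view (file names are hypothetical).
import cv2
import numpy as np

warped = cv2.imread("warped_view.png")  # novel view containing disocclusion holes
# Assume holes are marked as pure black pixels; build a binary mask of them.
hole_mask = np.all(warped == 0, axis=2).astype(np.uint8) * 255

# Fill the masked pixels from neighboring content (classical, non-learned method).
filled = cv2.inpaint(warped, hole_mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("filled_view.png", filled)
```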
More details about the new neural network are given in a paper published on the arXiv server. The technology places fewer demands on the source images than existing approaches. In the future it may improve further and find use in virtual reality.