Google has published a post on its AI Blog titled “High Fidelity Image Generation Using Diffusion Models.” The new technology increases image resolution by up to a factor of 16 while preserving key details.
As AIN notes, the presented technology can upscale an image from 32×32 pixels to 64×64 and then to 256×256, while a 64×64 photo can be enlarged to 256×256 and then to 1024×1024.
Image: a high-resolution picture generated from a 32×32 original.
Google has published a description of two algorithms. The first, Super-Resolution via Repeated Refinement (SR3), works well for upscaling portraits and natural images.
When used for 8x face upscaling, SR3 achieved a “confusion rate” of almost 50% in human evaluations, meaning viewers could barely tell its outputs from real photographs. The results are genuinely photorealistic and recall movie scenes in which characters recover a license-plate number or some other tiny detail from a low-quality surveillance recording.
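The core idea behind SR3 is iterative refinement: start from pure noise at the target resolution and repeatedly denoise it, conditioned on the low-resolution input, until a plausible high-resolution image emerges. The sketch below illustrates only that loop structure; the stand-in “denoiser” (a blend toward a nearest-neighbor upsample) and the noise schedule are assumptions for illustration, not Google's learned U-Net model.

```python
import numpy as np

def toy_sr3_refine(low_res, scale=2, steps=10, seed=0):
    """Toy sketch of SR3-style iterative refinement.

    Starts from pure noise at the target resolution and repeatedly
    'denoises' toward an image consistent with the low-res input.
    The real model uses a learned neural denoiser; here we use a
    trivial pull toward the upsampled conditioning image instead.
    """
    rng = np.random.default_rng(seed)
    h, w = low_res.shape
    # Nearest-neighbor upsample of the conditioning image
    # (assumption: stand-in for however the real model conditions).
    target = np.repeat(np.repeat(low_res, scale, axis=0), scale, axis=1)
    x = rng.standard_normal((h * scale, w * scale))  # start from pure noise
    for t in range(steps, 0, -1):
        noise_level = t / steps
        # Stand-in "denoiser": blend toward the conditioning image,
        # re-injecting an amount of noise matching the current step.
        x = (1 - noise_level) * target + noise_level * (
            0.5 * x + 0.5 * noise_level * rng.standard_normal(x.shape)
        )
    return x

low = np.linspace(0.0, 1.0, 16).reshape(4, 4)  # a tiny 4x4 "image"
high = toy_sr3_refine(low, scale=2)
print(high.shape)  # (8, 8): 2x upscaled
```

With a fixed seed the loop is deterministic; each pass shrinks the remaining noise, so the output converges toward an image consistent with the low-resolution input.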
Once Google was convinced of SR3's effectiveness for upscaling photos, the company went even further with a second approach, a class-conditional model called CDM (Cascaded Diffusion Models). CDM upscales an image in several successive stages, each stage adding the missing pixels, notes NIXsolutions.
Google has posted a set of examples showing this cascaded upscaling of low-resolution photos.
The potential applications for this technology are vast, from restoring old family photographs to enhancing medical images such as MRI or X-ray scans. For now, however, Google has not made the SR3 and CDM algorithms publicly available.