NIXSolutions: Stable Diffusion Goes Open Access

The Stability AI development team has announced the end of the closed beta testing phase of their Stable Diffusion neural network. Image generation is now freely available to everyone.

According to MIXED, the source code for running the neural network is also publicly available. However, to run Stable Diffusion on your own computer, you will need an NVIDIA video card with at least 6-7 GB of video memory.


What can Stable Diffusion do?

Stable Diffusion is a collaboration between researchers from Stability AI, RunwayML, LMU Munich, EleutherAI and LAION. The neural network generates images from text descriptions and is a direct competitor to the well-known DALL-E 2 and Midjourney.

To generate images, you download the neural network and run it on your own server or home device, as in the sketch below. Users can also try a simplified version of the model directly in the browser via the link.
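The article does not prescribe a particular toolchain for running the model locally. As one illustration, here is a minimal sketch using the Hugging Face diffusers library and the publicly released v1-4 checkpoint; both are assumptions for the example, not details from the article.

```python
# Minimal sketch: generating an image locally with Stable Diffusion
# via the Hugging Face diffusers library (an assumed toolchain, not
# prescribed by the article). Needs a CUDA-capable GPU with roughly
# 6-7 GB of VRAM when run in half precision.
import torch
from diffusers import StableDiffusionPipeline

# Download the publicly released weights (here the v1-4 checkpoint).
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,  # half precision to fit in ~6 GB of VRAM
)
pipe = pipe.to("cuda")

# Generate an image from a text description and save it to disk.
prompt = "a photograph of an astronaut riding a horse"
image = pipe(prompt).images[0]
image.save("astronaut.png")
```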

What are the rules of use?

The model is released under the Creative ML OpenRAIL-M license, which permits both commercial and non-commercial use. The license is aimed at the ethical and legal use of the model and makes you responsible for any issues with the legality of the generated content.

Interestingly, Stable Diffusion can create images of real people, which OpenAI does not allow with DALL-E 2, says AIN. Other systems such as Midjourney or Pixelz.ai can also do this, but they do not match the quality of Stable Diffusion, and none of them are open source.

Why is this interesting?

Generative neural networks that create images based on a general textual description have made a huge leap in quality, availability, and usability over the past six months. However, Stable Diffusion is the first to offer users an open license, open source code, and the ability to run it on a mid-range home computer.

Thus, Stable Diffusion gives researchers and interested users who do not have access to dedicated GPU servers the opportunity to experiment with a modern generative model, notes NIXSolutions. And once the model runs on a MacBook with Apple's M1 chip and on video cards from other manufacturers, it could become as widespread as the familiar Adobe Photoshop. Nothing like this has happened before.