Use Stable Diffusion & ControlNet in 6 Clicks For FREE (no GPU or coding skills needed…)
You can start using Stable Diffusion with just 6 clicks, for free, and without an expensive computer. You don't need any coding experience either: 6 clicks and you are ready to generate images.
You can also use any custom models or extensions you wish, including ControlNet, so you can generate images like this from your sketches.
Let’s start.
First, click on the Google Colab link in the video description. Google Colab lets you run code in the cloud on Google's standard GPUs for free. You can also get a premium GPU with a Colab Pro subscription.
But even with the free version, you get a GPU with 15 GB of memory, so it is pretty nice. All you need is a free Google account.
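If you want to confirm what you were assigned, a quick sanity check in a fresh code cell works. This is not part of the notebook; it is just a minimal sketch using PyTorch, which Colab ships with:

```python
import torch

# Confirm the Colab runtime actually has a GPU attached
# (Runtime > Change runtime type > Hardware accelerator > GPU).
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(props.name)  # e.g. "Tesla T4" on the free tier
    print(f"{props.total_memory / 1024**3:.1f} GB of GPU memory")
else:
    print("No GPU found - switch the runtime type to GPU first.")
```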
On the Google Colab page, there are 6 sections. If you click on Show code, you can see the code that will be executed when you press the run button. So, go ahead and run the first section. It will ask for permission to access your Google Drive, because it will save the Stable Diffusion files there. Choose your Google account and click Allow.
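Under the hood, that permission prompt corresponds to a standard Colab Drive mount, roughly like this (the mount point is an assumption; the notebook's own code may differ):

```python
from google.colab import drive

# Mount Google Drive so the files the notebook downloads persist
# between sessions instead of vanishing when the Colab VM resets.
drive.mount('/content/gdrive')
```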
It will take a couple of minutes to complete. Once you see the green checkmark, you can run the second section to install the Automatic1111 web UI. Then, we can choose which model we want to download in this part. I will add and use the Realistic Vision V2.0 model later, but for now, I choose 1.5 as the base model to download.
If you already have a model on your Drive, you can add its path here. I have mine on my computer, so I will upload it later. Once it is done, you can check it in the file browser: it automatically created all of the necessary folders inside Google Drive, and the 1.5 model is here under the models folder.
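If you prefer code to the file browser, a short walk over the tree shows the same structure. The root path below is an assumption; match it to whatever the notebook actually created:

```python
import os

# Walk the folder tree the notebook created on Drive; downloaded
# checkpoints land under models/Stable-diffusion.
root = '/content/gdrive/MyDrive/sd/stable-diffusion-webui/models'
for dirpath, dirnames, filenames in os.walk(root):
    depth = dirpath[len(root):].count(os.sep)
    print('  ' * depth + os.path.basename(dirpath) + '/')
    for name in filenames:
        print('  ' * (depth + 1) + name)
```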
In the next section, you can install the ControlNet models. A free Google account comes with 15 GB of Drive storage. That is not enough to download all of the ControlNet models, but you can fit around half of them without any problem.
In this example, I want to create renders from sketches, so I will download only the scribble model.
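For reference, fetching a single ControlNet model by hand looks roughly like this. The notebook's checkboxes do the equivalent for you, and the exact target folder depends on the notebook version, so treat both paths as assumptions:

```python
import os
import urllib.request

url = ('https://huggingface.co/lllyasviel/ControlNet/resolve/main/'
       'models/control_sd15_scribble.pth')
# Assumed destination - the ControlNet extension's model folder on Drive.
dest = ('/content/gdrive/MyDrive/sd/stable-diffusion-webui/'
        'extensions/sd-webui-controlnet/models/control_sd15_scribble.pth')

os.makedirs(os.path.dirname(dest), exist_ok=True)
# Each of the original SD 1.5 ControlNet checkpoints is around 5.7 GB,
# so downloading only the one you need keeps you inside the 15 GB quota.
urllib.request.urlretrieve(url, dest)
```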
Now you can run the last section and actually launch Stable Diffusion. If you want, you can also add a username and a password to your interface, but I won't do that now.
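For the record, those username and password fields map to a standard Automatic1111 launch flag. A hand-written launch command would look roughly like this (the credentials are placeholders, and the notebook builds its own variant of this command):

```python
# Launch the web UI with a public Gradio link gated behind basic auth.
# --share creates the public URL; --gradio-auth adds the login prompt.
# "myuser:mypassword" is a placeholder - pick your own credentials.
!python launch.py --share --gradio-auth myuser:mypassword
```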
Open this URL, and we have Stable Diffusion running and ready to use. Before I generate anything, I will add the Realistic Vision model to the models folder so I can use it.
After I upload it, I restart the UI, and it generates a new URL. Open it, and as you can see, our model is ready to use here.
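If the checkpoint is already somewhere on your Drive, you can also copy it into the web UI's model folder with a couple of lines instead of uploading it through the browser. Both paths below are hypothetical, so adjust them to your setup:

```python
import shutil

# Hypothetical paths - point these at your own checkpoint and install.
my_model = '/content/gdrive/MyDrive/checkpoints/realisticVisionV20.safetensors'
webui_models = ('/content/gdrive/MyDrive/sd/stable-diffusion-webui/'
                'models/Stable-diffusion/')

shutil.copy(my_model, webui_models)
# Afterwards, hit the refresh button next to the checkpoint dropdown
# (or restart the UI) so Automatic1111 picks up the new file.
```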
And if you check, the Scribble ControlNet model is here too.
First, I will test it with just text-to-image. Let's use this prompt from the model's examples. After you hit Generate, you can watch the progress here.
Here is the result, generated with the following prompt and negative prompt:
Prompt: photo of coastline, rocks, storm weather, wind, waves, lightning, 8k uhd, dslr, soft lighting, high quality, film grain, Fujifilm XT3
Negative prompt: (blur, haze, deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation
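As an aside, you can script the same generation against Automatic1111's HTTP API instead of clicking through the browser. This is only a sketch: it assumes the UI was launched with the --api flag, and the URL is a placeholder for the Gradio link the Colab prints:

```python
import base64
import requests

url = 'https://your-gradio-link.gradio.live'  # placeholder - use your own link

payload = {
    'prompt': ('photo of coastline, rocks, storm weather, wind, waves, '
               'lightning, 8k uhd, dslr, soft lighting, high quality, '
               'film grain, Fujifilm XT3'),
    # Shortened here - paste the full negative prompt from above.
    'negative_prompt': '(blur, haze, deformed iris, deformed pupils:1.4)',
    'steps': 25,
    'width': 512,
    'height': 512,
}

# txt2img returns the generated images as base64-encoded strings.
response = requests.post(f'{url}/sdapi/v1/txt2img', json=payload, timeout=300)
response.raise_for_status()
with open('coastline.png', 'wb') as f:
    f.write(base64.b64decode(response.json()['images'][0]))
```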
Now I will try ControlNet with this napkin sketch by Zaha Hadid and try to create a realistic render from it.
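Scripted, the same sketch-to-render step goes through the ControlNet extension's API hooks. The argument names vary between extension versions and the prompt here is made up, so take this only as a hedged sketch:

```python
import base64
import requests

url = 'https://your-gradio-link.gradio.live'  # placeholder

# Encode the sketch so it can travel inside the JSON payload.
with open('napkin_sketch.png', 'rb') as f:
    sketch_b64 = base64.b64encode(f.read()).decode()

payload = {
    'prompt': 'realistic architectural render, photo, 8k uhd',  # made-up example
    'steps': 25,
    # The ControlNet extension registers itself as an "alwayson" script;
    # these key names follow older extension versions and may differ in yours.
    'alwayson_scripts': {
        'controlnet': {
            'args': [{
                'input_image': sketch_b64,
                'module': 'scribble',              # preprocessor
                'model': 'control_sd15_scribble',  # the model downloaded earlier
            }]
        }
    },
}

response = requests.post(f'{url}/sdapi/v1/txt2img', json=payload, timeout=300)
response.raise_for_status()
with open('render.png', 'wb') as f:
    f.write(base64.b64decode(response.json()['images'][0]))
```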
All of the images you generate are saved to the same Google Drive folder. You can find them here, under the outputs folder.
And here are some of the results from the same sketch.
That was all for this video. I hope it was helpful to you. If you want to see more sketch-to-render examples, you can check out this video.
See you at the next one.