How to Create Design Variations with AI (3 methods)

Hello there! Today I want to show you how you can edit your design with Midjourney’s new in-painting feature to create variations of it.

I have been experimenting with this for a while now and some of my favorite use cases are generating alternative versions of the facade design or imagining the same building in different environments.

In addition to that, I will show how you can use a similar workflow to edit your own images or real photos instead of Midjourney images.

First, I generated my base image in Midjourney. You can see the base prompt I used here; I started with a pretty basic one and then kept iterating on top of it.

My goal here was to create an image from the side or a corner in this composition, instead of a direct front view. This one is not bad, so let’s try to generate a different design for this corner part. Once you like an image, upscale it first, and then click the “Vary (Region)” button.

After that, a pop-up window will appear where you can mark the area you want to edit and regenerate.

Make sure “Remix” mode is enabled in the settings; otherwise, you won’t be able to add a new prompt. You can mask the area with either the rectangle or the lasso tool, and then write the new prompt for the in-painting.

Since this prompt only affects the area we marked, we can focus specifically on the new design. We don’t need to describe the environment or the surroundings, because Midjourney analyzes the base image and makes the new generation match its style, environment, and lighting.

Here are some of the options we generated. Of course, this is more of a conceptual study to see what can be done with the feature. In a couple of minutes, we will see how you can edit your own images to create variation ideas on top of them.

Let’s choose this one, keep the newly generated part the same, and try to see it in different environments by changing the surroundings. I will follow the same workflow, but this time mask out the whole canvas except the middle part.

It does a really great job of understanding the overall mood and vibe of the image and generating something that fits in, so the result doesn’t look artificial or edited. I repeated the same process a couple more times, and here is the final version.

In addition to this, I created a couple of other images like this one. With those, my goal was to see how a building could influence its surroundings and urban context, and how the facade design would affect the final look, even for a relatively minimal project.

Of course, I am aware that most of these facade designs are not actually practical or buildable; a real facade depends on the building layout and the positions and sizes of the rooms. But it can be fun to play with different facade options on your own project to come up with different styles and designs.

Personally, in my previous projects, coming up with different design ideas at this phase was always one of the more challenging parts of the process. But you can’t do this with Midjourney alone, because you can’t upload your own images and keep them unchanged while editing only a region.

Let’s say this is the project we want to work on, and we want to see different alternatives for its facade. I will simply open it in Adobe Photoshop to edit it with Firefly. You don’t need the Beta version anymore: Photoshop got an update last week, so you can now use Firefly with a commercial license directly in the regular release.

If you haven’t used Generative Fill before, all we need to do is select the area we want to regenerate with a selection tool, choose Generative Fill, describe what we want to add, and hit Generate.

Similar to Midjourney, it analyzes the whole image and creates something that fits its style. Adobe Firefly is definitely one of the easiest and most accessible tools for many people.

But it is quite limited at the moment: the only control we have is the text prompt. That’s why, if you want to explore more alternatives, you should definitely try this with Stable Diffusion.
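
If you are comfortable with a bit of Python, the same masked-region idea can also be scripted with the open-source diffusers library. The snippet below is only a minimal sketch, assuming a publicly hosted in-painting checkpoint and placeholder file names that you would replace with your own:

```python
# Minimal masked-region in-painting sketch with Hugging Face diffusers.
# Assumes a CUDA GPU and `pip install diffusers transformers accelerate torch pillow`.
import torch
from diffusers import StableDiffusionInpaintPipeline
from PIL import Image

# Load a publicly hosted in-painting checkpoint (swap the repo id if needed).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# Base image of the project and a mask where white marks the region to regenerate.
# The file names are placeholders for your own render and mask.
image = Image.open("facade_base.png").convert("RGB").resize((512, 512))
mask = Image.open("facade_mask.png").convert("L").resize((512, 512))

# As in Midjourney, the prompt only needs to describe the new facade, not the whole scene.
result = pipe(
    prompt="weathered corten steel facade with deep vertical fins",
    image=image,
    mask_image=mask,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

result.save("facade_variation.png")
```

Inverting the mask gives you the other trick from earlier: keeping the building and regenerating the surroundings instead.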

Stable Diffusion gives you way more control over the whole process. For example, you can sketch something on top of the image and let the AI use that sketch as an input during generation.
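
One common way to do that sketch-guided step is a ControlNet “scribble” model on top of Stable Diffusion. Again, this is only a rough sketch with example repo ids and file names, not a full tutorial:

```python
# Sketch-guided generation with a ControlNet scribble model.
# The repo ids and file names below are examples you may need to adjust.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler
from PIL import Image

# The scribble-conditioned ControlNet lets a rough line drawing steer the composition.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-scribble", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # any Stable Diffusion 1.5 base checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# A hand-drawn sketch (white lines on a black background) exported from any drawing app.
scribble = Image.open("facade_sketch.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="minimal concrete residential building, large corner windows, overcast daylight",
    image=scribble,
    num_inference_steps=30,
).images[0]

result.save("sketch_guided_variation.png")
```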

Here is a quick example of that workflow in action. I will share a more detailed video about it soon, so please let me know if you are interested. And finally, let’s see the final images from the different platforms.

You can find more similar images and videos on my Instagram.

Let me know what you think about this workflow. Would you use it in your own projects?

I hope you liked the video and see you in the next one.
