Free Midjourney Alternative for Architects


Today I’m going to explore D5 Hi, the new AI platform from D5 Render, and see how we can create these views from this simple screenshot. It is a browser-based application, so you don’t need to download anything to use it. If you enter this URL, you can join the waitlist with your email address, and once you have access, you can currently use it for free.

I will use this house as the base model to test it. I want to create an exterior view from this side, so I will simply take a screenshot of the 3D model.

Once you launch D5 Hi, we see a user interface like this. I think it’s pretty friendly and easy to follow. At the top, we can choose between two scenarios: Architecture and General.

Under that, there is a box where we can describe what we want to create. We can also add a negative prompt with the Exclusion button. The platform works both as text-to-image and image-to-image. First, let’s try text only.
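D5 doesn’t publish how its Exclusion button works, but in diffusion models negative prompts are typically implemented through classifier-free guidance: the model predicts noise for both the positive and the negative prompt, then pushes the result away from the negative one. A minimal sketch of that blending step, assuming the standard formulation:

```python
import numpy as np

def guided_noise(noise_neg, noise_pos, guidance_scale):
    """Classifier-free guidance: steer the prediction toward the positive
    prompt and away from the negative (exclusion) prompt."""
    return noise_neg + guidance_scale * (noise_pos - noise_neg)

# Toy vectors standing in for noise predictions.
neg = np.array([0.2, 0.2])
pos = np.array([1.0, 0.0])
print(guided_noise(neg, pos, 1.0))  # with scale 1.0 this equals pos exactly
print(guided_noise(neg, pos, 7.5))  # larger scales exaggerate the push away from neg
```

The guidance scale (7.5 is a common default in open diffusion models) controls how strongly the exclusion is enforced.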

There are a couple of options for the prompt box. If we click on this one, we find a list of keywords we can use in our description: view type, design type, style, architects (if you want to follow a similar style), mass types, materials, and so on. It’s pretty useful to have them listed like this, so I can just click a few of them to create something I like.

In addition to these keywords, once I have written a description I can expand it with the “Prompt Expand” option. I believe it runs the existing text through some kind of language model and adds details that fit the existing style. We get three options to choose from, so it can be useful.

Let’s also check the Advanced tab before generating anything, to choose the image size. Here we have settings for the aspect ratio and the image size. The maximum image resolution is 1024 pixels on the longer side, and the other dimension is adjusted according to the ratio.
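This is not D5’s actual code, but the behavior described above can be sketched in a few lines: cap the longer side at 1024 and derive the other side from the aspect ratio, snapping to a multiple of 8, which is a common requirement for latent-diffusion models:

```python
def output_size(ratio_w, ratio_h, max_side=1024, multiple=8):
    """Cap the longer side at max_side and derive the other dimension
    from the aspect ratio, snapping both to a multiple of 8."""
    if ratio_w >= ratio_h:
        w = max_side
        h = round(max_side * ratio_h / ratio_w)
    else:
        h = max_side
        w = round(max_side * ratio_w / ratio_h)
    # Snap down to the nearest allowed multiple.
    return (w // multiple) * multiple, (h // multiple) * multiple

print(output_size(16, 9))  # (1024, 576)
print(output_size(1, 1))   # (1024, 1024)
print(output_size(3, 4))   # (768, 1024)
```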

I will change the number of outputs to two for faster generation and hit Generate. Generation speed also depends on how many people are using the platform at the same time, but it was pretty fast, actually faster than I expected. This is the actual speed in the video, so you can see for yourself. Of course, it can vary from time to time.

And here are the first images. I chose some random keywords for the prompts, so we are not judging the design itself, but the image works nicely as a composition and mostly makes sense. There are no floating elements or weird parts.

Then I created a couple more with different aspect ratios. When you click a generated image, you get these options under it: HD to upscale the existing image, Image Variation to create images similar to this one, or Download if you like it.

Below the text description, there is a tab where we can choose additional LoRA models if we want to. When you click it, you see the ready-to-use models, like Multistory, which adds more detail to building facades, or Zaha Hadid Style, which gives your final image that kind of vibe.

I will try the Illustration one to create illustration-like images. After you select a model, you can adjust its effect factor scale from here. I changed my prompt for this one and generated some images.
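D5’s internals aren’t public, but “effect factor” sliders on LoRA models usually scale the low-rank update that the LoRA adds on top of the base model’s weights. A hedged numpy sketch of that idea:

```python
import numpy as np

def apply_lora(W, A, B, scale):
    """Blend a LoRA update into base weights W.
    A (r x in) and B (out x r) hold the low-rank update; `scale` plays the
    role of the effect-factor slider: 0 = base model only, 1 = full effect."""
    return W + scale * (B @ A)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))   # base weight matrix
A = rng.normal(size=(2, 4))   # rank-2 LoRA factors
B = rng.normal(size=(4, 2))

W_off = apply_lora(W, A, B, 0.0)   # identical to the base weights
W_on = apply_lora(W, A, B, 1.0)    # full LoRA effect
print(np.allclose(W_off, W))       # True
```

Because the update is just added in, intermediate slider values interpolate smoothly between the base style and the full LoRA style.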

I really liked the images with this additional LoRA model; they all have a super cool vibe and style. Here are some of them.

Now I will use the image reference option to create conceptual renders of my project. There are two sections under image reference: Diagram, where you can upload a reference image to generate images in a similar style, and Structure, where I will upload the screenshot I took of the base model. Again, with this slider we can adjust the scale.

But if you click on this arrow, we can control even more factors. You can choose between Sketch, Wireframe, and Shape options depending on your input. For this one, I chose the Wireframe and Shape options.
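How D5 applies the structure slider isn’t documented, but in plain image-to-image diffusion a similar “how much does the reference constrain the output” control is a strength value that decides how many denoising steps actually run on top of the input. A minimal sketch of that mapping, under that assumption:

```python
def start_step(strength, num_steps=50):
    """Map an image-to-image 'strength' slider (0..1) to the index of the
    first denoising step to execute: low strength keeps the input largely
    intact (few steps run), high strength redraws almost everything."""
    strength = min(max(strength, 0.0), 1.0)  # clamp the slider
    steps_to_run = round(num_steps * strength)
    return num_steps - steps_to_run

print(start_step(0.2))  # 40 -> only the last 10 of 50 steps run
print(start_step(0.8))  # 10 -> 40 steps run; the input is mostly redrawn
```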

Similarly, we get more control over the creativity type by clicking this arrow, where we can switch between Creative and Accurate. With this slider, we determine how closely the output sticks to our prompt and how much freedom the model has during generation.

Lastly, I added a simple prompt for a house with copper cladding, described the surroundings a bit, and hit Generate.

The results are more accurate to the actual design than I expected, to be honest. It is a useful tool for visualizing conceptual ideas as more realistic views. But of course, it has its limitations too. For example, we have very little control over the materials in the design; if you want a certain material in a specific place on the building, it is pretty challenging to achieve.

Likewise, I think it would be really hard to work on a landscape design with specific plants, flowers, and trees. So it is better to combine these workflows than to pick one side, either AI or a render engine.

It would be great to explore the idea in the early days of a design with a workflow like this, and later work on a more precise model where you add the exact materials you want. The same goes for landscape design.

I believe D5 is planning to integrate this AI feature directly into their render engine, and I am really excited about that. It could be a super cool addition.

Here are some of the final images I created both from a 3D model and an interior sketch.

I hope you liked the video and D5 Hi. The link to the platform is in the video description, so don’t forget to give it a try yourself.

Thank you and see you in the next one!
