AI-aided Design? Text-to-image Processes for Architectural Design


  • Matteo Flavio Mancini Department of Architecture, Roma Tre University
  • Sofia Menconero Department of History, Representation and Restoration of Architecture, Sapienza University of Rome



artificial intelligence, text-to-image, design drawing, authorship, Stable Diffusion


Artificial Intelligence (AI) is marking a turning point in many aspects of human life, and it is appropriate to question its potential use in architectural representation processes.
This contribution provides a brief overview of the recent history of AI technologies to explain how they work, along with a snapshot of the current state of the art, from text-to-image to image-to-3D processes, focusing mainly on the Stable Diffusion platform.
It also offers an overview of the latest studies in the field of architectural design. The subsequent experimentation becomes an opportunity to showcase the potential of AI in the co-creation process and its ability to simulate various graphic techniques, up to photorealistic visualization. On the other hand, it highlights the limitations that, at the current stage of development, sometimes invalidate the results of text-to-image processes with respect to the scientific aspects of representation.
The conclusions reflect on the differences between human and artificial intelligence, the theme of shared authorship between humans and machines, and their consequences for architectural design.


Carpo, M. (2011). The alphabet and the algorithm. Cambridge - London: The MIT Press.

Carpo, M. (ed.). (2013). The digital turn in architecture 1992-2012. Chichester: John Wiley & Sons.

Carpo, M. (2017). The second digital turn: design beyond intelligence. Cambridge - London: The MIT Press.

Colton, S. et al. (2021). Generative Search Engines: Initial Experiments. In A. Gómez de Silva Garza et al. (Eds.). Proceedings of the 12th International Conference on Computational Creativity, Mexico City, 14-18 September 2021, pp. 237-246. Mexico City: ACC.

Crowson, K. et al. (2022). VQGAN-CLIP: Open Domain Image Generation and Editing with Natural Language Guidance. In arXiv. <> (accessed 18 July 2023).

Del Campo, M. (2022a). When Robots Dream. In Conversation with Alexandra Carlson. In Architectural Design, V. 92, No. 3, pp. 47-53.

Del Campo, M. (2022b). Neural Architecture. Design and Artificial Intelligence. Novato: Oro Editions.

Dhariwal, P., Nichol, A. (2021). Diffusion Models Beat GANs on Image Synthesis. In M. Ranzato et al. (eds.). Advances in Neural Information Processing Systems, V. 34, pp. 1-15. Cambridge: MIT Press.

Goodfellow, I. et al. (2014). Generative Adversarial Nets. In Z. Ghahramani et al. (eds.). Advances in Neural Information Processing Systems, V. 27, pp. 1-9. Cambridge: MIT Press.

Hegazy, M., Saleh, A.M. (2023). Evolution of AI role in architectural design: between parametric exploration and machine hallucination. In MSA Engineering Journal, V. 2, No. 2, pp. 262-288. (accessed 18 July 2023).

Hong, W. et al. (2022). CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers. In arXiv. <> (accessed 18 July 2023).

Jaruga-Rozdolska, A. (2022). Artificial intelligence as part of future practices in the architect’s work: MidJourney generative tool as part of a process of creating an architectural form. In Architectus, V. 3, No. 71, pp. 95-104.

Jun, H., Nichol, A. (2023). Shap-E: Generating Conditional 3D Implicit Functions. In arXiv. <> (accessed 18 July 2023).

Nichol, A. et al. (2022). Point-E: A System for Generating 3D Point Clouds from Complex Prompts. In arXiv. <> (accessed 18 July 2023).

Paananen, V. et al. (2023). Using Text-to-Image Generation for Architectural Design Ideation. In arXiv. <> (accessed 18 July 2023).

Ploennigs, J., Berger, M. (2022). AI Art in Architecture. In arXiv. <> (accessed 18 July 2023).

Prix, W. et al. (2022). The Legacy Sketch Machine. From Artificial to Architectural Intelligence. In Architectural Design, Machine Hallucinations: Architecture and Artificial Intelligence, V. 92, No. 3, pp. 14-21.

Ramesh, A. et al. (5 January 2021). DALL-E: Creating images from text. <> (accessed 18 July 2023).

Radford, A. et al. (2021). Learning Transferable Visual Models from Natural Language Supervision. In M. Meila, T. Zhang (eds.). Proceedings of the 38th International Conference on Machine Learning. Virtual, 18-24 July, V. 139, pp. 8748-8763. Maastricht: ML Research Press.

Reed, S. et al. (2016). Generative Adversarial Text to Image Synthesis. In M. F. Balcan, K. Q. Weinberger (eds.). Proceedings of the 33rd International Conference on Machine Learning, V. 48, pp. 1060-1069. Maastricht: ML Research Press.

Rombach, R. et al. (2022). High-Resolution Image Synthesis with Latent Diffusion Models. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, 18-24 June, pp. 10674-10685. New York: IEEE.

Singer, U. et al. (2022). Make-A-Video: Text-to-Video Generation without Text-Video Data. In arXiv. <> (accessed 18 July 2023).

Tong, H. et al. (2023). An attempt to integrate AI-based techniques into first year design representation course. In K. Vaes, J. Verlinden (eds.). Connectivity and Creativity in times of Conflicts. Cumulus Conference Proceedings. Antwerp, 12-15 April, pp. 1-5. Antwerp: University of Antwerp.

Tsigkari, M. et al. (29 March 2021). Towards Artificial Intelligence in Architecture: How machine learning can change the way we approach design. In Plus Journal, <> (accessed 18 July 2023).

Wallish, S. (2022). GAN Hadid. In S. Carta (ed.). Machine Learning and the City: Applications in Architecture and Urban Design, pp. 477-481. Hoboken-Chichester: John Wiley & Sons.

Yildirim, E. (2022). Text-to-image generation A.I. in architecture. In H. Hale Kozlu (ed.). Art and Architecture: Theory, Practice and Experience, pp. 97-119. Lyon: Livre de Lyon.

Zhang, L., Agrawala, M. (2023). Adding Conditional Control to Text-to-Image Diffusion Models. In arXiv. <> (accessed 18 July 2023).



How to Cite

M. F. Mancini and S. Menconero, “AI-aided Design? Text-to-image Processes for Architectural Design”, diségno, no. 13, pp. 57–70, Dec. 2023.