From MLPs to APIs Seminar | 2024
First Exhibited: December 18, 2024
Program: AI-Developed Massing Model
Software Used: Python, Stable Diffusion, ComfyUI, Grasshopper
Role: Designer, Programmer
Professor: Adam Burke
Brief: Develop an AI-powered workflow that helps designers achieve a specific design goal.
Much of the current discourse around AI-generated architectural imagery concerns algorithms that output vignettes depicting a building’s exterior or interior. Because these AI models are trained on images of buildings, they ultimately take a painter’s approach to representing architecture: the building’s exterior or interior is illustrated as one sees it, but without an embedded understanding of spatial organization.
Less has been discussed about potential AI applications in design stages in which the proposal is ideated, designed, and refined. This process necessitates the creation of information-dense two-dimensional projections: plans, sections, and elevations. An AI model can be trained to understand these drawings, which reveal far more about the unique and important spatial qualities of an entire building than isolated renders of its exterior or interior.
Yet an AI model cannot easily be trained on plan and section drawings alone, because they are rendered in a variety of styles and conventions. If every plan follows different drawing conventions, the model has a harder time learning how to interpret, analyze, and extract design ideas from any of them. Additionally, plan drawings often retain more detail than is necessary to understand the design of a building, which leaves the model unsure of what in the drawings it should actually analyze.
The architectural diagram emerges as a solution to these issues. By filtering out the unnecessary elements of a plan or section and revealing the hidden ideas and organizational intents of a space, the diagram can enable an AI model, much as it does a human, to understand what makes a built environment spatially interesting and functional. Once trained on such diagrams, a model can design its own buildings that carry an embedded understanding of form and space and are themselves architecturally captivating.
MassingModel is a design workflow that quickly generates spatially interesting CAD models in Rhino3D. These models resemble conventional massing models in their rough exterior appearance but are more detailed and precise inside: walls, floors, and columns are all modeled within the building. The workflow automatically generates massing models with an embedded architectural understanding of sections, which can serve as starting points for more beautiful, thoughtfully designed architectural spaces.

Workflow
First, a synthetic dataset of museum plan drawings is generated with Stable Diffusion via the ComfyUI interface. The plans are generated to be visually similar, which keeps the dataset consistent and in turn makes training later in the workflow easier. Once the plans are generated, some are selected and manually redrawn in a simple graphical style: black pixels represent the building’s poche (walls, columns, and other architectural elements that would be “cut through” when making a plan drawing), and white pixels represent open space.
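The black-and-white convention amounts to a simple threshold over a grayscale plan image. A minimal sketch of that convention, assuming the plan has been loaded as a NumPy array (the function name and threshold value are illustrative, not part of the actual workflow):

```python
import numpy as np

def binarize_plan(gray, threshold=128):
    """Map a grayscale plan image to the two-tone convention:
    0 (black)  = poche (walls, columns, cut elements),
    255 (white) = open space.
    `gray` is a 2-D uint8 array; `threshold` is an assumed cutoff."""
    return np.where(gray < threshold, 0, 255).astype(np.uint8)

# Toy 3x3 "plan": dark pixels become poche, light pixels open space
plan = np.array([[ 10, 200,  30],
                 [240,  15, 250],
                 [  5,   5, 255]], dtype=np.uint8)
mask = binarize_plan(plan)
```

In practice the manual redrawing also simplifies geometry, not just tone, but a hard threshold like this captures the pixel-level encoding that the later steps of the workflow rely on.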


Manually redrawn plan.
Further in the workflow, a pix2pix model (an example of a conditional generative adversarial network) is trained to transform input plans from the synthetic dataset into simplified plans in the black-and-white graphic style. The trained pix2pix model is then run on the rest of the dataset to produce those redrawn, simplified plans automatically.
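pix2pix trains its generator against two terms: an adversarial loss that rewards fooling the discriminator, plus an L1 loss that keeps the output close to the hand-redrawn target. A minimal numerical sketch of that generator objective in NumPy (the arrays and the weight λ = 100 follow the original pix2pix formulation; they are not values taken from this project):

```python
import numpy as np

def generator_loss(d_fake, target, fake, lam=100.0):
    """pix2pix generator objective:
    an adversarial BCE term pushing the discriminator's score on
    fakes toward 1, plus an L1 term keeping G(x) close to the
    ground-truth simplified plan.
    d_fake: discriminator probabilities on generated patches, in (0, 1)
    target: ground-truth simplified plan, values in [0, 1]
    fake:   generator output, values in [0, 1]
    lam:    L1 weight (100 in the original pix2pix paper)."""
    adv = -np.mean(np.log(d_fake))       # BCE with all-ones labels
    l1 = np.mean(np.abs(target - fake))  # reconstruction term
    return adv + lam * l1

d_fake = np.array([0.5, 0.5])        # discriminator unsure about fakes
target = np.zeros((4, 4))            # all-poche ground truth
fake = np.full((4, 4), 0.1)          # generator output, slightly off
loss = generator_loss(d_fake, target, fake)
```

The L1 term matters for this application: it is what forces the model to reproduce the plan’s actual geometry rather than merely produce plausible-looking black-and-white textures.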


Once the synthetic dataset has been redrawn by the pix2pix model, each simplified plan is used as an input to a Grasshopper script that builds a simplified building model from the image along with other user-specified inputs. The black-and-white graphic style proves advantageous here: the script extrudes rectangles that correspond to the black pixels of the image while leaving alone the rectangles that correspond to its white pixels. The result is a Brep in Grasshopper that a user can then manipulate however they wish.
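Outside of Rhino, the core of that script, mapping each black pixel to an extruded box, can be sketched in plain Python. The cell size and extrusion height stand in for the user-specified Grasshopper inputs; the function name and return format are illustrative, not the actual Grasshopper definition:

```python
def pixels_to_boxes(mask, cell=1.0, height=3.0):
    """Turn a binary plan image into box extents for extrusion.
    mask:   2-D sequence of pixel values, 0 = black (poche),
            255 = white (open space)
    cell:   plan-view size of one pixel in model units
    height: extrusion height (e.g., one storey)
    Returns a list of ((x0, y0, 0), (x1, y1, height)) corner pairs,
    one per black pixel -- the rectangles the Grasshopper script
    would extrude into the building's walls and columns."""
    boxes = []
    for i, row in enumerate(mask):
        for j, val in enumerate(row):
            if val == 0:  # black pixel -> solid poche, extrude it
                x0, y0 = j * cell, i * cell
                boxes.append(((x0, y0, 0.0),
                              (x0 + cell, y0 + cell, height)))
    return boxes

# 2x2 plan with a single wall pixel in the top-left corner
mask = [[0, 255],
        [255, 255]]
boxes = pixels_to_boxes(mask, cell=2.0, height=3.5)
```

Inside Grasshopper, each corner pair would become a box (for example via a Domain Box or a Rhino.Geometry bounding box), and the union of those boxes forms the Brep the user manipulates.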


