HUMBI
SONG



The Secret Garden


Design Process
Human x AI x Webcam
Computational Tools
Digital/Physical


The process of using generative AI image tools can feel frustratingly mismatched with human design intentions and workflows. One contributing reason is the type of “inputs” these tools commonly accept: the designer must compress all sorts of information into a simple text or image prompt. But designers rely heavily on embodied, tacit knowledge about materiality and on contextual understanding of culture and society.

Architecture and design are embodied practices. Designers think through making, learn through material experimentation, and generate knowledge through haptic feedback. As Polanyi observed, “we know more than we can tell”; much of design expertise resists being written down.

What are some alternative “inputs” that make use of bodily intuition, or tacit knowledge about how things are put together in the physical realm?



Context:


This work explores uses of generative AI tools beyond the typical “one-click,” input-output generation of a 2D image or 3D mesh/model, which can lead to design fixation (difficulty imagining alternative design solutions once you have seen a plausible one), visual clichés, and the erasure of diverse individual perspectives into statistical averages. Yet there are also exciting possibilities in leveraging generative and analytical AI. There is a strong need for design tools that better support human design intention. This involves changes to the UI of these tools, better scaffolding around tool use, and a better understanding of different models of human-AI interaction in the design process.
In our ACADIA 2025 publication, we established a theoretical framework for categorizing the modes of human-AI interaction in use today. The following series of experiments are examples of the second mode introduced in that publication: a “real-time” paradigm of human-AI interaction.





1. Markerboard -> Hand sketching landforms -> Realtime AI
Students: Sean Li, George Ma, Jagger Sun






2. Rhino screenshot + Physical model -> Turntable -> Realtime AI
Students: Heyifan Jin, Yupeng Gao, Ariel Adhidevara




Resulting physical concept / parti model: 









3. Found scrap model-making materials (cardboard) -> Hand manipulating physical models in front of webcam -> Realtime AI

Research Assistant: Jessica Chan 





4. Found objects (McMaster Carr hardware 3D models) -> Rhino  -> Realtime AI

Research Assistant: Zhelun Li






Experiments 2 through 4 use Krea.ai’s Realtime Tool.

Experiments 3 and 4 are published in:
Chan, J., & Song, H. (2025). AI & Found Objects: Translating Generative AI’s Materials Misinterpretations into Fabrication. Meta-Responsive Approaches. Proceedings of Conference of the Ibero-American Society of Digital Graphics (SIGraDi).





