Sunday, April 5, 2026

Can AI Recognize Living Structure?

Sitting area in the Parents' Realm of the Sala House - Christopher Alexander, Architect (photo by Ekyono; file is licensed under the Creative Commons Attribution-Share Alike 4.0 International license).
 
For years I’ve wondered what role AI might play in architectural design—not in the routine domains of optimization, code compliance, or energy modeling, but in the harder question of how buildings acquire coherence, depth, and human resonance. Much of the current discourse avoids that question. It gravitates toward what is measurable and leaves the deeper structure of design unexamined.
 
That deeper question repeatedly leads me back to Christopher Alexander’s work and, more recently, to Nikos Salingaros’s explorations of whether large language models can detect “living geometry” in buildings.
 
Alexander identified fifteen fundamental properties that tend to co‑occur in environments we experience as alive. These appear in nature, in traditional buildings, and in the artifacts people cherish. They are not stylistic devices but a system of interdependent relationships:
 
  • Levels of scale
  • Strong centers
  • Thick boundaries
  • Alternating repetition
  • Positive space
  • Good shape
  • Local symmetries
  • Deep interlock and ambiguity
  • Contrast
  • Gradients
  • Roughness
  • Echoes
  • The void
  • Simplicity and inner calm
  • Not‑separateness

When these properties reinforce one another, the result feels coherent and grounded. Salingaros links them to human neurophysiology, arguing that our perceptual systems evolved to favor nested scales and relational order.
 
With this in mind, I tested an online tool called the 15 Fundamental Properties of Wholeness Analyzer, by Danny Raede. It is an admirable initiative: open‑source, accessible, and one of the few attempts to operationalize Alexander’s ideas. But it arrives without documentation on training data, methodology, or weighting. Experimenting with it suggests a modest image classifier that can detect visual signatures such as symmetry, repetition, and contrast, but struggles with the relational, multiscalar depth Alexander’s framework demands. A model can recognize what local symmetry looks like; it cannot easily evaluate how that symmetry strengthens centers across scales or contributes to overall wholeness. These limits reflect the state of current vision models more than the ambition of the tool.
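To make the distinction concrete, here is a minimal sketch of the kind of pixel‑level cue such a classifier can compute. This is purely illustrative and is not how Raede's tool actually works; it simply shows that bilateral symmetry can be scored directly from pixels, while relational judgments across scales cannot.

```python
import numpy as np

def horizontal_symmetry_score(image: np.ndarray) -> float:
    """Crude left-right symmetry score in [0, 1].

    1.0 means the image equals its mirror image; lower values mean
    weaker bilateral symmetry. A pixel-level cue only -- it says nothing
    about whether the symmetry strengthens centers across scales.
    """
    mirrored = image[:, ::-1]  # flip left-right
    diff = np.abs(image.astype(float) - mirrored.astype(float)).mean()
    # Normalize by the image's dynamic range (guard against flat images).
    scale = float(image.max() - image.min()) or 1.0
    return 1.0 - diff / scale

# A perfectly mirror-symmetric pattern scores 1.0.
symmetric = np.array([[1, 2, 2, 1],
                      [3, 4, 4, 3]])
print(horizontal_symmetry_score(symmetric))  # 1.0
```

A real classifier learns richer features, but the principle is the same: it operates on visual signatures within the frame, not on relationships among centers, boundaries, and context.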
 
Portion of an analysis of the Sala House image, by Danny Raede's 15 Fundamental Properties of Wholeness Analyzer.

A deeper constraint applies to any image‑only system: a single photograph is never a building. It freezes one viewpoint, one moment, one lighting condition. Many of Alexander’s properties are inherently experiential. Thick boundaries reveal themselves in section. Gradients unfold in movement. Not‑separateness depends on context beyond the frame. Positive space requires reading the shape of the outdoor room. Judging wholeness from one static image is necessarily partial.

After using Raede’s tool, I wanted a more relational reading. I turned to a large language model (Microsoft Copilot) and prompted it to analyze the same images through the lens of the fifteen properties: “Consider relationships across scales, massing, thresholds, centers, voids, interlock, context, and overall coherence.”

The difference was immediate. The model produced something closer to the kind of relational analysis an architect might offer. It could do this because it was not limited to pixel‑level cues. It could integrate the image, the conceptual structure of the fifteen properties, and the contextual hints in the prompt. This does not overcome the limits of a single photograph, but it allows the model to make sense of the available information in a more integrated way than a vision‑only classifier can. 
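The workflow I used can be sketched as a simple prompt‑construction step. The property list is Alexander's and the key instruction is quoted from my actual prompt; everything else (the function name, the wording of the framing text, and how the prompt would be sent to a model) is an assumption for illustration, not Copilot's interface.

```python
# Alexander's fifteen fundamental properties, as listed above.
PROPERTIES = [
    "Levels of scale", "Strong centers", "Thick boundaries",
    "Alternating repetition", "Positive space", "Good shape",
    "Local symmetries", "Deep interlock and ambiguity", "Contrast",
    "Gradients", "Roughness", "Echoes", "The void",
    "Simplicity and inner calm", "Not-separateness",
]

def build_wholeness_prompt(project_context: str) -> str:
    """Compose a relational-analysis prompt around the fifteen properties.

    Hypothetical helper: the framing wording is an assumption, not a
    documented interface of any particular model.
    """
    rubric = "\n".join(f"- {p}" for p in PROPERTIES)
    return (
        "Analyze the attached images through the lens of Christopher "
        "Alexander's fifteen fundamental properties:\n"
        f"{rubric}\n"
        "Consider relationships across scales, massing, thresholds, "
        "centers, voids, interlock, context, and overall coherence.\n"
        f"Project context: {project_context}"
    )

prompt = build_wholeness_prompt(
    "1982 design for an addition to the Carnegie Library, McMinnville"
)
print(prompt)
```

The point is not the code but the structure of the request: the model receives the image, the conceptual rubric, and contextual hints together, which is what enables a relational rather than feature‑by‑feature reading.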

Design for a renovation and expansion of the McMinnville Public Library (1982). The original Carnegie library is on the left; the proposed addition, which opens toward McMinnville City Park, is on the right.

To illustrate, here is its analysis of my 1982 project for an addition to the Carnegie Library in McMinnville (completed while I was a student in the University of Oregon’s architecture program), based on my design drawings and the photograph, seen above, of the physical model I built:
 
"The project exhibits a remarkably coherent alignment with Alexander’s fifteen properties. Its organization unfolds through clear levels of scale—from the civic park and axial approach to the outdoor room, lobby, reading rooms, and smaller alcoves—each nested within the next. The original Carnegie library and the new addition form strong centers, stabilized by the cloister‑like outdoor room and the layered thresholds that create thick boundaries between inside and out. Exterior spaces are shaped as positive space rather than residual voids, and the interlock between old and new is handled with care: axes align, proportions echo, and the addition adapts to its context with an intentional roughness that reflects the site’s particularities. Gradients of enclosure and light develop through the sequence of plaza → cloister → lobby → reading rooms, while the repetition of structural bays, window groupings, and spatial rhythms creates subtle echoes across scales. The underlying parti—House of Reading and House of Books—gives the building a simplicity and inner calm, and the reciprocal relationship between the volumes expresses a genuine not‑separateness. The coherence of the whole arises not from stylistic gestures but from a relational structure that resonates strongly with Alexander’s fifteen properties."

A vision‑only classifier cannot produce this kind of relational reading.

These experiments raise a larger question: could any future tool reliably detect living structure? Even with far more capable models, the answer is probably no. Wholeness is emergent. It arises from iterative negotiation among site, program, structure, light, materials, craft, and human inhabitation. These relationships unfold across scales and through time. Many require embodied, situated judgment. AI may one day illuminate hidden patterns or diagnose coherence more reliably, but the deeper life of a building is unlikely to spring from software alone. Design remains a recursive practice grounded in the realities of construction.

Even so, the potential is real. If future systems combine multiscale visual reasoning, spatial‑relationship understanding, datasets annotated for the fifteen properties, and insights from environmental psychology and cognitive science, AI could become a useful diagnostic partner. It might help architects see strengths, weaknesses, and missed opportunities. Whether Alexander’s properties remain the definitive rubric is debatable. They endure because they describe persistent, cross‑cultural patterns rooted in human perception. But they demand a relational intelligence that challenges even skilled humans to articulate fully. Any meaningful AI engagement with them will require moving beyond feature detection toward something closer to architectural judgment.

My own exploration echoes themes in Salingaros’s work on living geometry’s measurable effects on cognition and wellbeing. The takeaway is modest but clear: relational coherence in buildings is not an aesthetic preference; it has consequences for how people perceive and inhabit space. For now, AI’s most immediate value lies in conversation, as a tool that helps us see our work more sharply, test assumptions, and notice relationships we might otherwise overlook. In my experiments, it has already done that. If future systems can extend this capacity without losing sight of the relational nature of design, they may become useful partners in the ongoing effort to create environments that are coherent, grounded, and humane.
