Sunday, February 8, 2026

Can an AI System Write Specifications?


Architects often encounter new tools that promise efficiency. Some deliver it. Others only shift the work. The rapid expansion of artificial intelligence has revived a familiar question in architectural practice: whether a machine can produce a complete set of written specifications for a building, formatted in accordance with CSI MasterFormat, with only limited direction from the architect. The premise is straightforward: provide the AI with a Design Development set or BIM model, supplement it with a UniFormat outline, and ask it to generate the rest. 

Before considering the question, it helps to define the scope of this discussion. My aim is not to evaluate the technical capabilities of AI or to predict its trajectory. I am describing the nature of architectural specifications as they function in practice, and why their essential characteristics do not lend themselves to full automation. The observations that follow come from the work itself, not from any claim to expertise in artificial intelligence. 

Conversations about AI-generated specifications tend to fall into two predictable camps: optimism that the technology will soon automate the task, and caution from specifiers who emphasize the role of human judgment. I am not positioning this essay within that debate. Instead, I am outlining how specifications operate in practice and why their structure resists full delegation to an automated system. 

With that boundary in place, the short answer is that an AI system can produce text that resembles specifications. The longer answer is that resemblance does not equal authorship, and it certainly does not equal responsibility. 

AI systems excel at producing language that follows a pattern. They can generate the familiar three-part structure of a MasterFormat section and fill it with plausible content. They can map a UniFormat outline to the appropriate divisions and suggest likely sections. They can expand common assemblies into generic descriptions suitable for a preliminary draft. For routine editing tasks, such as checking terminology, consistency, or cross-referencing, they already offer real assistance. 

But specifications are not primarily a writing exercise. They serve as instruments of service that carry intent, performance criteria, and contractual force. They allocate risk. They coordinate with drawings, consultant documents, procurement requirements, and the owner’s expectations. They reflect decisions that are technical, legal, and experiential. An AI system cannot distinguish between what the drawings show and what the architect intends. It cannot infer performance requirements from geometry. It cannot decide when a prescriptive specification suits the project or when a performance specification becomes necessary. It cannot judge installer qualifications, warranty durations, or the level of detail required to make a section enforceable. It can only generate text that sounds like something an architect might have written. 

A simple example illustrates the point: imagine a project where the drawings call for a mechanically fastened roofing system, but an AI-generated specification defaults to a fully adhered system because that assembly appears more frequently in its training data. The contradiction is not a technical glitch; it is a failure of judgment. The machine cannot know which document reflects the architect’s intent, and the architect must resolve the discrepancy regardless. The risk created by the mismatch remains entirely human. 

The gap does not arise from a lack of training data or processing power. It stems from the nature of the work. Specifications depend on judgment, and judgment depends on experience, liability, and the ability to weigh consequences. A machine with none of these cannot produce a specification in the professional sense. It can only imitate the surface features of one. 

Some may argue that future AI systems trained on liability-aware datasets could narrow this gap. Even if such systems emerge, the underlying issue persists. Responsibility for the document cannot shift to a tool that cannot bear it. The architect would still need to verify the content, and verification would still require judgment. More sophisticated text does not change the structure of professional accountability. 

It is still reasonable to consider what might improve. AI systems will become better at interpreting drawings and models. They will identify assemblies more reliably, compare documents more effectively, and flag inconsistencies more quickly. They may eventually serve as competent reviewers: tools that scan a DD set, identify missing sections, and highlight divergences between specifications and drawings. They may help maintain internal consistency across large projects. They may reduce the time spent on boilerplate. These gains are modest, but they are real.
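To be fair to the technology, the reviewing role described above is exactly the part of the work that does lend itself to automation. A minimal sketch, assuming hypothetical lists of MasterFormat section numbers extracted from a project manual and from drawing references (the numbers below are illustrative, not from any real project), might look like this:

```python
# A minimal sketch of a consistency check between the sections listed
# in a project manual and the sections referenced on the drawings.
# Section numbers here are illustrative examples only.

def flag_divergences(manual_sections, drawing_sections):
    """Return (sections referenced on drawings but missing from the
    manual, sections in the manual never referenced on drawings)."""
    manual = set(manual_sections)
    drawings = set(drawing_sections)
    missing_from_manual = sorted(drawings - manual)
    unreferenced = sorted(manual - drawings)
    return missing_from_manual, unreferenced

manual = ["07 54 23", "08 11 13", "09 29 00"]
drawings = ["07 54 23", "08 11 13", "08 71 00"]

missing, unreferenced = flag_divergences(manual, drawings)
print(missing)       # drawings call for these; the manual lacks them
print(unreferenced)  # specified, but never referenced on the drawings
```

Even this trivial comparison illustrates the essay's point: the set arithmetic is easy, but deciding whether an unreferenced section belongs in the manual, or which document reflects the design intent, still requires a human.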

What they are unlikely to do is assume the role of the specifier. The profession’s obligations—clarity, coordination, and accountability—do not reduce to pattern recognition. Even if an AI system produced a document that looked complete, the architect would still need to verify every line. The liability would remain exactly where it is now. 

A more interesting question concerns the appearance of completeness. When a machine can create a document that sounds authoritative, the risk of misplaced confidence increases. A specification that reads smoothly but lacks enforceability poses more danger than an incomplete one. This shifts the architect’s labor from accountable authorship to comprehensive verification, a change in workflow that increases risk without reducing responsibility. 

For now, the practical answer is that AI can assist with specifications, but it cannot write them. It can organize information, expand outlines, and check for consistency. It cannot make the decisions that define the document. The distinction matters, especially as the tools grow more fluent. The risk is not that AI will replace the architect, but that it will produce documents that appear authoritative without meeting the obligations of the profession. 

Years ago, in my 2012 post "Revenge of the Specifiers," I argued that specifiers serve as the managers of a project's essential information. That view has not diminished. If anything, the rise of AI has clarified how much of the work depends on judgment rather than text.

Architects have always adapted to new tools. This one will be no different. But the work of specifying—deciding what is required, why it is required, and how it must perform—remains a human responsibility. The danger is not that AI will write specifications, but that it will encourage architects to trust documents whose authority exceeds their judgment.
