Throughout the existence of Parametric Monkey, we’ve been contemplating where the Architecture, Engineering and Construction (AEC) industry is headed and, more importantly, where it should be headed. It’s fair to say that our attitudes have changed during this time. As early adopters of BIM, we quickly moved past the change-management issues and began looking at the bigger picture. Much of this search was driven by the fact that there seemed to be only a single narrative – CAD is bad, and BIM is the future. While acknowledging that BIM has brought about significant improvements to the AEC industry, we’ve never seen it as the end goal in fixing the deeply rooted inefficiencies in the industry. This article rethinks the current BIM process, which we’ve termed BIM 1.0, and postulates how it will evolve into BIM 2.0.
Progress is impossible without change, and those who never change their minds cannot change anything.
George Bernard Shaw
Background
As a 20-year-old, third-year architecture student, I produced my first Construction Documentation (CD) set. The project was for a small house, nothing too complicated. Apart from my boss, who never seemed to be around much, I was the sole person on the project. And needless to say, it was a steep learning curve. Along with trying to figure out what information needed to be included in the documentation set, I was also learning AutoCAD. A double whammy.
The final task was to produce the window schedule. Without paying much attention to the task, I measured off the AutoCAD drawing, entered the values into the Excel schedule, and off the documents went to tender. A few months later, I received a phone call from the builder. The timber-framed windows had arrived on site but didn’t fit. Naturally, we reviewed our documentation to ensure we weren’t at fault. Surely the builder or sub-contractor was to blame. Alas, this was not the case.
Lessons learnt
To my horror, I discovered that while all the drawings were correct, I had inadvertently mixed up the width and height dimensions in the window schedule for certain windows. The sub-contractor picked up some of the errors but, unfortunately, missed many others. I was instructed to go to the site first thing in the morning, explain the issue to the builder and try to find a resolution. The builder wasn’t happy. The client wasn’t happy. And I wasn’t thrilled about it either.
I spent the next few weeks working out which windows could be re-purposed through a façade re-design and which would need to be re-made. This was not as straightforward as it sounds, as the design still needed to comply with the original development approval. I learnt my lesson the hard way. It was a perfect storm fuelled by a lack of diligence on my part and non-existent Quality Assurance (QA) checks from multiple parties. Welcome to the construction industry!
Preaching to the BIM choir
Fast-forward six years, I was introduced to Revit and Building Information Modelling (BIM). Like Dorothy Boyd’s lust for Jerry Maguire, they had me at hello. Synchronised plans and elevations. Yes, please. But wait, there’s more. Synchronised scheduling. Hallelujah.
What had caused my project so much grief six years earlier was now effectively designed out through technology. I had become a convert and couldn’t fathom why other organisations hadn’t already adopted it. Instead, they were busy building business cases to check the return on investments and complaining about the cost of the technology upgrades that would be needed. There was some serious professional whinging going on.
Maybe those organisations had never had similar experiences. But I doubt it. We’re all human, and everybody makes mistakes. That’s why we have QA systems and professional indemnity insurance. More likely, BIM at that moment was still very much in the early-adopter stage of the technology adoption curve, and they wanted to see how BIM adoption would play out. And play out it did.
BIM adoption
BIM adoption today is far and wide. When we examined the current use of BIM in Australia for the Australian Institute of Architects’ (AIA) ‘BIM and Beyond: Design technology in architecture’ report, 70% of respondents reported using BIM for the majority of their projects, 20% claimed to use BIM for a minority of projects, 9% did not use BIM at all, and the remaining 1% were unsure.1 Many other countries, I dare say, follow a similar distribution of BIM adoption.

History of BIM
But BIM’s history spans much further back than when I, or indeed much of the industry, was introduced to it. The term building model was first used in a 1985 paper by Simon Ruffle2, and later in a 1986 paper by Robert Aish3. Through BIM’s approximately 37-year lifespan, however, it has remained more or less the same. Software has come and gone, but the basic premise of a digital model functioning as a database remains the same. We may squabble amongst ourselves about the exact definition of BIM, even some 37 years down the track. Still, much like the phrase used in 1964 by United States Supreme Court Justice Potter Stewart to describe his threshold test for obscenity, we know it when we see it.

Since my enlightenment some 16 years ago, I have actively avoided CAD-based projects as much as possible. BIM was the future as far as I was concerned. Dipping my toe deeper and deeper into the BIM pool, I moved from role to role until, eventually, I was regional BIM Manager for one of the largest architects in the world. Around this time, I started to notice things I hadn’t previously noticed. Like an architect feeling a building’s material or looking at details that no layperson would bother about, I began noticing some serious limitations and conceptual flaws with BIM. Was it doomed? Absolutely not. But there was certainly room for improvement, and it needed more than tinkering at the periphery.
BIM’s utopian vision
BIM promised a new technological and workflow utopia. But this hasn’t played out as planned, as others have also noted:
BIM, as we know it, replaced the old 2D drawing workflow with the concept of creating a single 3D model to generate co-ordinated drawings. This utopian vision has not quite worked out the way it was planned. As nobody trusts anyone in this industry, there are multiple BIM models at multiple stages. Firms are drowning in drawings, which they might have edited in AutoCAD, and so break the BIM sync. BIM software has deepened the silos in which data sits in proprietary formats and designing anything big or detailed, or big and detailed, will seriously challenge your hardware.
Martyn Day4
I would add that BIM 1.0 has been characterised by trying to connect and align design professionals via standardisations. We see this through the emergence of government mandates, Common Data Environments (CDE), and ISO 19650 standards. Indeed, some of the most brilliant technological minds in the industry devote the vast majority of their time to addressing these issues.
But instead of developing extensive protocols for clash detection, wouldn’t it be better to eliminate clashes altogether through automation? Instead of arguing about file formats, wouldn’t it be better just to create an API connection that can stream data? To return to my windows schedule error, rather than developing a better Excel template for AutoCAD, we need to rethink the entire digital process to design out the problem.
BIM’s philosophy
Central to the BIM process are shared philosophies. Given that these philosophies haven’t evolved much over the past 37 years, it is worth revisiting them to assess whether they are still fit for purpose. The two central concepts that I would like to focus on are integrated models and progressive model refinement.
Promise #1 – An integrated model
As a collaborative process, BIM promised that multiple parties would be able to contribute to creating the single source of truth. For instance, if the structural engineer changed the column location, this would be reflected in the architect’s model in real-time via platforms such as Revit Server or BIM 360. Additionally, model authors would save considerable time and energy by simply importing manufacturer-specific content into their model. Theoretically, all of this is possible, but the reality is much different.
Reality
Contractual agreements remain adversarial, and organisations are loath to agree to real-time integration for fear of third parties impeding their ability to deliver contractual deliverables. Instead of a live feed, most organisations have adopted a push/pull data transfer protocol, typically late on a Friday afternoon, with Monday dedicated to reconciling differences between the models. In other words, we accept model integration, but only when convenient.
Furthermore, most BIM Managers are reluctant to integrate third-party manufacturer-specific assets for fear of file-size bloat and the difficulty of ensuring graphical and scheduling consistency. Indeed, many organisations forbid the practice outright. Instead, the preferred workflow is either to keep the original generic asset or to rebuild the third-party asset to comply with the organisation’s BIM standards. Certainly, there are logical reasons for these practices, but they fundamentally undermine BIM’s philosophy. Put another way, current software limitations prevent the ideal BIM process from being realised.
Promise #2 – Progressive model refinement
Known by several aliases, including the BIM wheel of death or the golden thread, the central premise is the same: a single BIM model can be used through all phases of a project – from inception through to facilities management. Because model progression may not be in sync with traditional design stages – concept, schematic, detailed design, construction documentation, etc. – terms like Level of Detail (LOD) and Level of Information Needed (LOIN) came into our lexicon.

In Revit, for example, progressive model refinement is facilitated via family types. A generic ‘100mm concept’ wall type can be used initially, and as new information becomes available, the wall type is swapped out for, say, a 90mm timber stud wall with 2x 10mm plasterboard finishes.
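To make the mechanism concrete, here is a minimal sketch in plain Python (deliberately not Revit API code, and with illustrative wall types) of what type swapping preserves: the element’s identity and placement stay fixed while only its definition is refined.

```python
from dataclasses import dataclass


@dataclass
class WallType:
    name: str
    thickness_mm: float  # overall assembly thickness


@dataclass
class Wall:
    element_id: int          # stable identity across refinement
    start_xy: tuple          # placement in plan (metres)
    end_xy: tuple
    wall_type: WallType      # the only thing that changes when we refine


concept = WallType("Generic - 100mm concept", 100)
detailed = WallType("90mm timber stud + 2x 10mm plasterboard", 110)

wall = Wall(element_id=1001, start_xy=(0, 0), end_xy=(6, 0), wall_type=concept)
wall.wall_type = detailed  # refine in place; identity and placement survive
print(wall.element_id, wall.wall_type.name)
```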


Unlike CAD, where elements needed to be redrawn, a BIM element could be continually swapped out, enabling the model to evolve. And once done, the model could be handed over for another party to pick up where the previous one left off. Thus, the model passes from design to construction to facility management. Again, all of this is possible, but the reality is much different and full of limitations.
Reality
Progressive model refinement is premised on the belief that a design problem is too complex to solve outright, so the process is chunked and resolved incrementally. We start with the macro and move to the micro. Coupled with the concept of progressive model refinement is waterfall project management.

In this model, critical paths and milestones are established, and contractual terms agreed to the effect of “work requested by the client to be revisited, or alterations requested to our documents after sign-off, will be regarded as a variation to the services.” In other words, once a decision is made and approved, it is difficult and expensive to revisit. To be more specific, BIM enables only one-way model refinement.
Take, for example, a LOD400 fabrication model of steelwork flooring. Changing this to mass timber would effectively involve deleting everything and starting from scratch. Or, to use AEC parlance, it is abortive work. While we may be able to substitute an element, such as a steel beam, current BIM software does not enable us to substitute a system, such as changing steel for mass timber.
To give another example, consider taking a LOD400 model and reverse engineering it to obtain a simplified building massing or environmental model. Conceptually, this is relatively simple – surely removing detail is easier than adding detail. But on a technical level, it is very difficult with current BIM software. Sure, we can host elements to a mass, but invariably the relationship is broken, and the two models evolve independently. Or, as Gwyneth Paltrow so elegantly put it – conscious uncoupling. The golden thread, therefore, is not really a continuum but a sequential array of discrete threads.
Towards BIM 2.0
Many will argue that with appropriate project management planning, these situations shouldn’t arise. But we live in an uncertain world. Planning and building legislation changes. The cost of labour and materials rises and falls. Some materials may not be available without lengthy lead times. Or, quite simply, we might discover a better way of doing something too late in the design process. The fact is that, whatever the reason, BIM and waterfall project management are simply not agile enough in uncertain circumstances. This limitation becomes particularly problematic when engaging with Modern Methods of Construction, which reverse the information flow so that micro information is known in advance of macro information.
The Emperor’s new clothes
Hans Christian Andersen’s The Emperor’s New Clothes tells the story of a vain emperor who is exposed before his subjects. Two swindlers arrive at the capital city of an emperor who spends lavishly on clothing at the expense of state matters. Posing as weavers, they offer to supply him with magnificent clothes that are invisible to those who are stupid or incompetent. The emperor hires them, and they set up looms and go to work. A succession of officials, and then the emperor himself, visit them to check their progress. Each sees that the looms are empty but pretends otherwise to avoid being thought a fool.
Finally, the weavers report that the emperor’s suit is finished. They mime dressing him, and he sets off in a procession before the whole city. The townsfolk uncomfortably go along with the pretence, not wanting to appear inept or stupid, until a child blurts out that the emperor is wearing nothing at all. The people then realise that everyone has been fooled. Although startled, the emperor continues the procession, walking more proudly than ever.
The story has become a parable to describe a situation in which people are afraid to criticise something or someone because of the perceived wisdom of the masses. Criticising and challenging BIM is bad for business; everyone knows that CAD is bad and BIM is good. And therefore, if you are against BIM, you must be pro CAD. But are these our only two options? Is it really an either/or decision?
Embracing change
As mentioned above, BIM 1.0 has been premised on standardisation. But why? Why is standardisation so important? Yes, it provides a shared understanding and improves alignment, so we sensibly avoid reinventing the wheel on each new project. But if we think about it on a deeper level, BIM 1.0 is about minimising rework – agree on everything in advance and minimise abortive work. This may mean not having to rebuild a model saved in a native file format. Or it may mean not having to rename hundreds of drawings because someone changed the sheet naming convention. Either way, the thought process is the same: change is bad as it almost always means abortive work.
From standardisation to systematisation
According to the Susskinds, market forces, technological advances, and human ingenuity will drive professional work from craft to standardisation, then to systematisation and, finally, to externalisation.5 As we’ve already discussed, the move towards standardisation is already well underway via BIM 1.0. Here, practical expertise is routinised for later reuse to avoid errors and ensure work consistency.

The third step, however, is systematisation, where systems are developed to assist experts or replace them altogether. Where standardisation involves reducing tasks to reusable paper-based routines, systematisation involves codifying knowledge in a machine-readable format. This is what we’ve coined BIM 2.0.
What is BIM 2.0?
Over the past few years, we’ve been contemplating how to achieve systematisation at scale. We started with the question, “How might we facilitate an agile design method that enables both macro and micro changes at any point in the design process?” In other words, if BIM 1.0 is waterfall-centric, what would an agile-centric design process look like?

Our experiment takes the form of MetricMonkey, software we have been developing to test our hypotheses. So how exactly is it different to BIM as we currently know it?
#1 – Digitalised knowledge
To achieve automation at scale, we must digitalise our knowledge. For too long, knowledge has resided in product catalogues, PDFs and the heads of professionals. By codifying this knowledge into an algorithm, we can share not just information but knowledge. Detailed information would be immediately retrievable, reducing any barriers to change and thereby fostering an agile and iterative design workflow.
Take, for instance, car parking. We could refer to AS 2890.1 (and pay $200 for the privilege) to discover the sizes and clearances required for off-street car parking. Alternatively, we could codify the logic so that anyone may automatically populate their model with a compliant design. And because the generation is automated, changing the layout results in no abortive work.
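As a rough sketch of what codifying this logic might look like, the example below captures parking bay rules as data plus a small generator, so a layout can be regenerated on demand. The user classes, bay dimensions and aisle widths shown are illustrative placeholders only, not values taken from AS 2890.1.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class BayRule:
    width_m: float
    length_m: float
    aisle_m: float  # minimum aisle width for 90-degree parking


# Hypothetical rule set keyed by user class; not a substitute for the standard.
PARKING_RULES = {
    "long_stay": BayRule(width_m=2.4, length_m=5.4, aisle_m=5.8),
    "short_stay": BayRule(width_m=2.6, length_m=5.4, aisle_m=5.8),
}


def layout_bays(aisle_length_m: float, user_class: str) -> dict:
    """Return how many 90-degree bays fit along one side of an aisle."""
    rule = PARKING_RULES[user_class]
    count = int(aisle_length_m // rule.width_m)
    module_depth = round(2 * rule.length_m + rule.aisle_m, 2)
    return {"bays": count, "module_depth_m": module_depth}


print(layout_bays(30.0, "short_stay"))  # {'bays': 11, 'module_depth_m': 16.6}
```

Because the rules live as data rather than in a PDF, changing the layout simply means re-running the generator – there is no abortive work.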
#2 – Self-aware, associative modelling
Any design change must automatically and associatively modify other related elements. For example, suppose the building becomes longer. In that case, the model should automatically update the number of stairs (based on egress distances), which should then update the spatial planning to account for the additional stairs. Similarly, a sprinkler system should automatically be added if the building becomes taller and exceeds the height threshold for sprinklers.
What is proposed here is more than a simple Revit adaptive component. It is a hierarchical topology where one design decision has a cascading effect on all other elements, including their instantiation, modification, and deletion. This concept is akin to, say, a Photoshop adjustment layer or a 3DS Max modifier, which enables non-destructive adjustments. Elements must therefore be self-aware, not just in geometric terms, but also in how they perform against the applicable planning, building, environmental and manufacturing requirements.
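A minimal sketch of this cascading behaviour might look like the following, where a single design change re-evaluates every dependent rule. The egress distance and sprinkler height threshold are assumed figures for illustration, not actual code requirements.

```python
import math
from dataclasses import dataclass


@dataclass
class Building:
    length_m: float
    height_m: float
    max_egress_m: float = 40.0          # assumed maximum travel distance
    sprinkler_trigger_m: float = 25.0   # assumed effective-height threshold
    stairs: int = 0
    sprinklers: bool = False

    def update(self, **changes) -> None:
        """Apply a design change, then cascade every dependent rule."""
        for key, value in changes.items():
            setattr(self, key, value)
        self._rule_stairs()
        self._rule_sprinklers()

    def _rule_stairs(self) -> None:
        # One stair per egress-distance module along the building length.
        self.stairs = max(1, math.ceil(self.length_m / self.max_egress_m))

    def _rule_sprinklers(self) -> None:
        # Sprinklers are instantiated once the height threshold is exceeded.
        self.sprinklers = self.height_m > self.sprinkler_trigger_m


building = Building(length_m=60, height_m=20)
building.update(length_m=95, height_m=28)    # one change, two cascaded consequences
print(building.stairs, building.sprinklers)  # -> 3 True
```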
#3 – Integration of planning, design and construction
Planning legislation tells us where and what we can build. Construction legislation tells us how we need to build it. Between these two ends of the spectrum lies architectural design. If planning and construction legislation affect the design, we must connect these discrete silos to enable an agile-centric workflow. What is proposed here is more than mere compliance checking. It is fundamentally using these requirements to drive design generation.
Take, for instance, mass timber as a construction method. We could create a model, develop an automated auditing routine, and then determine if the design is compliant in terms of thickness, cantilevers, maximum spans, etc. Alternatively, because the knowledge has been digitalised (refer to #1), we can run multiple what-if scenarios to see how the design would change based on a different construction method. Such a process would finally allow architects to consider pre-fabrication during the design stages instead of retrospectively.
The same what-if process can also be run on the site’s planning requirements. How would our building change if the Floor Space Ratio or site setbacks increased or decreased? If the zoning changed, how would this modify the floor-to-floor heights and the associated Gross Floor Area?
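To illustrate, a simplified what-if function might derive the massing envelope directly from the planning controls, so that changing the FSR, setbacks or height limit regenerates the outcome instantly. The function name, controls and formulas below are deliberately simplified assumptions, not a real planning calculation.

```python
def massing_envelope(site_area_m2: float, fsr: float, setback_m: float,
                     site_width_m: float, site_depth_m: float,
                     height_limit_m: float, floor_to_floor_m: float) -> dict:
    # Footprint after applying setbacks on all sides (simplified rectangle).
    footprint = max(0.0, site_width_m - 2 * setback_m) * \
                max(0.0, site_depth_m - 2 * setback_m)
    max_gfa = site_area_m2 * fsr                          # FSR-capped floor area
    storeys_by_height = int(height_limit_m // floor_to_floor_m)
    storeys_by_gfa = int(max_gfa // footprint) if footprint else 0
    storeys = min(storeys_by_height, storeys_by_gfa)
    return {"footprint_m2": footprint, "storeys": storeys,
            "gfa_m2": storeys * footprint}


base = massing_envelope(1000, fsr=1.5, setback_m=3, site_width_m=25,
                        site_depth_m=40, height_limit_m=21, floor_to_floor_m=3.5)
uplift = massing_envelope(1000, fsr=2.5, setback_m=3, site_width_m=25,
                          site_depth_m=40, height_limit_m=28, floor_to_floor_m=3.5)
print(base["gfa_m2"], uplift["gfa_m2"])  # compare two planning scenarios
```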
Rethinking and unlearning BIM 1.0
The purpose of learning isn’t to affirm our beliefs; it’s to evolve our beliefs.
Adam Grant6
At any given moment in time, innovative solutions become the status quo, and the cycle must repeat. Experience has the tendency to constrain us to prefer certain choices, processes, or actions, even when they become obsolete or irrelevant.7 In fact, Adam Grant claims that “Intelligence is traditionally viewed as the ability to think and learn. Yet in a turbulent world, there’s another set of cognitive skills that might matter more: the ability to rethink and unlearn.”8 He continues, “We laugh at people who still use Windows 95, yet we still cling to opinions that we formed in 1995. We listen to views that make us feel good, instead of ideas that make us think hard.”9
It is worth reflecting on the significant improvement BIM has brought to the AEC industry. But as a concept, BIM is ten years older than Windows 95. Yes, BIM has evolved over the years and will continue to do so. But as highlighted above, BIM’s underlying limitations and flaws are a glass ceiling to progress. If we are to advance, we must first unlearn what we think is the right way so that we may find a better solution.
Conclusion
BIM 2.0 will be about moving from standardisation to systematisation. Data won’t be shared through open-source file formats or COBie templates. Instead, knowledge will be captured, codified, shared via an algorithm, and made available instantaneously. This change will enable the industry to move away from a waterfall project management system to an agile-centric model, where change is embraced rather than resisted. While some will no doubt find it challenging to move beyond the status quo, we must keep an open mind and have the courage to yell at the top of our voices, “the emperor has no clothes”. BIM 2.0 is coming. It won’t happen overnight, but the (agile) wheels are already in motion.
References
1 Australian Institute of Architects (2021). ‘BIM and beyond: Design technology in architecture’, p.13.
2 Ruffle, S. (1986). Architectural design exposed: From computer-aided drawing to computer-aided design. In Environment and Planning B: Planning and Design, March 7, pp. 385-389.
3 Aish, R. (1986). Building Modelling: The key to integrated construction CAD. CIB 5th International Symposium on the use of computers for environmental engineering related to building, 7–9 July.
4 Day, M. (12 Sept 2022). A tale of two (open) letters. In AEC Magazine.
5 Susskind, R. & Susskind, D. (2017). The future of the professions: How technology will transform the work of human experts. Oxford University Press, Oxford, p.196.
6 Grant, A. (2021). Think again: The power of knowing what you don’t know. WH Allen, London, p.26.
7 Soyer, E. & Hogarth, R. (2020). The myth of experience: Why we learn the wrong lessons, and ways to correct them. Hachette Book Group, New York, p.14.
8 Grant, A. (2021). Think again: The power of knowing what you don’t know. WH Allen, London, p.2.
9 Grant, A. (2021). Think again: The power of knowing what you don’t know. WH Allen, London, p.4.
4 Comments
Robin Drogemuller
John Haymaker et al identified the logic behind this in the 2004 paper “Perspectors: composable, reusable reasoning modules to construct an engineering view from other engineering views” Advanced Engineering Informatics Volume 18, Issue 1, January 2004, Pages 49-67
https://www.sciencedirect.com/science/article/pii/S1474034604000291
The basic idea is that the reasoning behind design is monotonic. If we undo a decision, we then have to check and possibly revise all of the decisions that depend on this decision.
Paul Wintour
Thanks for the link, Robin. I would love to read it, but it is behind a paywall. If you have a copy, do you mind sending it to me?
But I think the concept goes back to the very origins of BIM. In Robert Aish’s paper in 1986, he talks of components inheriting attributes of other components and terms these intelligent components. Possibly the difference to what I am proposing is that this intelligence happens at the system level, not just the component level. But I’m sure there will be examples of others who have had similar ideas.
https://www.researchgate.net/publication/320347623_Building_modelling_the_key_to_integrated_construction_CAD
Brent
Outstanding article, and touches on the cornerstone of the whole issue… contracts. With no change at the highest levels to address the contractual terms of the project, this needle just won’t move wholesale. I would like to see the impact of the UK Level 2 2014 mandate.
Paul Wintour
Hi Brent. I don’t see this as a contract issue at all. BIM 2.0 is a completely different way of working, independent of contracts or BIM mandates.