"Reasonably Relied Upon..." - The Growing Importance of Energy Modeling

Today's guest post was contributed by E. Mitchell Swann, P.E., LEED AP, a partner at MDC Systems


As a strong component of the sustainability initiative in buildings, energy use is rightfully taking its place as a leading metric in evaluating a building’s performance. Further emphasizing the importance of performance measurement is the expected rollout of an industry-wide “Building Energy Performance” label, intended to provide an objective comparison of energy use between buildings. Rating systems like Energy Star, along with model energy codes, treat both predictive energy use models and actual usage as crucial to determining a building’s true performance and rating. The USGBC’s newly issued LEED v3.0 rating system ties the initial certification, recertification and, by extension, possible decertification of LEED buildings closely to comparisons of modeled and measured energy use over time.


In rating systems where a project’s “end game” performance is evaluated against its ‘promise’ as presented in a predictive model developed during design, the quality of the prediction greatly influences how the quality of the results is perceived.


On many projects, energy modeling is performed by a subconsultant to the design team, or possibly by an independent member of the project team. The results of the modeling effort are extremely influential in design decisions, from site orientation to building envelope options to HVAC systems to control strategies. Clearly, an error during the modeling stage can lead to major problems downstream, especially with respect to energy use comparisons and maintaining certification.


But if an error is made, who do you turn to for redress? Obviously the first stop is at the door of the energy modeler. This might work in the independent team member scenario, but not so well in the subconsultant situation: dealing with a subconsultant to a member of the design team may require a ‘two-stop’ trip. But what happens if the modeling consultant doesn’t carry the liability insurance that design firms typically have? What if the error is not ‘caught’ until Year Two of operation, when we discover that the system as installed will never perform as modeled and actual energy costs are expected to be 15% higher than initially thought for the life of the building? What are the ‘damages’ incurred by the Owner? The cost of a building or system retrofit? The energy cost penalty for the life of the system? What about the engineer who reasonably relied upon the analysis provided by the energy modeler? If the initial model output is used to guide engineering and/or architectural design on the project, is a ‘third-party modeler’ providing design services? Do they need to be licensed as an architect or engineer?
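To put the damages question in perspective, a rough sketch of what a recurring 15% energy cost penalty can amount to when discounted to present value. Every figure here (baseline cost, system life, discount rate) is a hypothetical assumption, not data from any real project:

```python
# Hypothetical illustration: present value of a recurring energy-cost penalty.
# All inputs are assumed for the sake of the example.

def penalty_present_value(annual_energy_cost, penalty_rate, years, discount_rate):
    """Discount each year's extra energy cost back to today's dollars."""
    annual_penalty = annual_energy_cost * penalty_rate
    return sum(
        annual_penalty / (1 + discount_rate) ** t
        for t in range(1, years + 1)
    )

# Assume $200,000/yr baseline energy cost, a 15% penalty,
# a 25-year system life, and a 5% discount rate.
pv = penalty_present_value(200_000, 0.15, 25, 0.05)
print(f"Present value of penalty: ${pv:,.0f}")
```

Even with modest assumed numbers, the penalty runs well into six figures, which is why the question of who bears that exposure matters.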


These are just a few of the issues that can arise. While energy modeling has a fairly long history, the industry has generally treated models as a comparative tool for evaluating design options, not as a means of ascertaining the precise amount of energy a building will consume in a year. The newly ‘codified’ need to compare prediction to reality would seem to introduce a new level of expected accuracy, and with it, potential exposure. Is this effectively a prediction (and promise?) of performance that traditional E&O insurance does not cover? Will insurance products need to be revised to accommodate this new risk?


Design professionals would do well to clearly define performance expectations and potential limitations of their design, as well as the key parameters and assumptions used in modeling facility operations. For evaluating performance downstream, there may need to be an ‘audit clause’ allowing the designers a chance to evaluate how the facility was maintained or operated, and the impact on energy use, if there is a divergence between predicted and actual usage.


The importance of energy modeling, and of accuracy in modeling, is growing, especially as building ratings or certifications are linked to correlations between predicted and actual performance. Is this a good thing? At one level it seems reasonable to require actual performance to be at or near what was “promised”. But it is also important to remember that construction is a complicated, multi-variable process with many inputs, pieces and actors. A lot can happen that affects final performance, and it may be difficult to determine exactly all of the “whys”, “hows” and “whos” if something doesn’t perform exactly as expected. And of course we all know how fickle the weather can be.
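Because weather drives much of the year-to-year divergence, comparisons of predicted and actual use are often weather-normalized before anyone assigns blame, for example by scaling against heating degree days. A minimal sketch of the idea, with made-up numbers (and the simplifying assumption that all of the metered use is weather-sensitive, which real normalization methods do not make):

```python
# Hypothetical sketch: weather-normalizing measured energy use before
# comparing it to a model prediction. Degree-day and kWh figures are
# invented for illustration; real methods also separate out baseload.

def weather_normalized_use(measured_kwh, actual_hdd, typical_hdd):
    """Scale measured use from the actual weather year to a typical year."""
    return measured_kwh * (typical_hdd / actual_hdd)

predicted_kwh = 1_000_000   # model prediction for a typical weather year
measured_kwh = 1_150_000    # metered use in an unusually cold year

normalized = weather_normalized_use(measured_kwh, actual_hdd=5500, typical_hdd=5000)
divergence = (normalized - predicted_kwh) / predicted_kwh
print(f"Weather-adjusted divergence from model: {divergence:.1%}")
```

In this contrived case, a raw 15% overrun shrinks to under 5% once the cold year is accounted for, which illustrates how much an ‘audit clause’ analysis can change the apparent gap.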