Applications of Digital Workflows and Immersive Technology in Structural Engineering 

Jon LEACH*, Matthew BURTON a, Martin FOWLER b
*Director, AECOM, Aldgate Tower, 2 Leman Street, London E1 8FA, jon.leach@aecom.com

a Principal Engineer, AECOM
b Regional Director, AECOM



Abstract

Immersive Technology is changing the way the AEC industry communicates and operates. This paper will discuss, through case studies and exploration of hardware and software developments, how immersive technology can be implemented in structural engineering workflows, its current limitations, and what is necessary to further its development and adoption by the industry.

Keywords: Digital Design, Immersive Technology, Optimization, Case Studies.

 

1. Introduction

The communication skills that we use as engineers are changing. Engineers have traditionally been taught to draw, not to model or sculpt. Modelling geometry in a 3D environment, understanding basic additive and subtractive methods and defining more complex geometry are an increasingly important part of our education process, and 3D visualisation and Immersive Technology are seen as essential to enabling this change.

In building design the technology has previously been used tentatively for visualisation purposes and, despite some notable and innovative applications on construction projects, has been treated by many as a marketing novelty with limited practical application.

This paper will examine how immersive technology can be used as a collaborative tool, not just a novelty, and will discuss its application in a number of case studies, including its benefits, its current limitations, and how it already has a place as a valid communication tool within existing engineering design workflows.

Case studies will include the use of Augmented, Mixed and Virtual Reality on a diverse range of engineering and architectural projects, from large-scale stadia and airports to small-scale pavilions and the review of individual connection details. Different combinations of software and hardware will be discussed, including multi-user and multi-location applications, all in the context of plugging in to wider existing digital and parametric design workflows using single sources of project data for accurate and rapid model review.


2. Immersive Technology and Digital Workflows

In the popular consciousness immersive technology is nothing new. In the 1990s the box-office hits The Matrix and The Lawnmower Man brought to the screen the concept of wholly immersive experiences simulating entire worlds, and going even further back, a future in which the same technology is integral to all facets of our lives was presented in William Gibson's noir novel Neuromancer.

In 2002 Steven Spielberg's Minority Report introduced a near future in which Augmented and Mixed Reality become natural successors to the keyboard and mouse, and this year his adaptation of Ready Player One takes another step in this seemingly inevitable direction.

Whilst immersive technology has existed in one form or another since the 1980s [1], it has only evolved to a point remotely comparable to its literary or cinematic equivalents in the past decade. The $2 billion acquisition of Oculus VR by Facebook in 2014 [2] perhaps signalled the moment when immersive technologies began to be seen as a commercially attractive prospect across multiple industries.

Around this time the AEC industry, historically slow to react to new digital technologies, began looking tentatively at the technology for visualisation purposes and as a novelty. Consequently, it is only in the past couple of years that we have seen how immersive technologies can be utilised as a collaborative tool that becomes a valuable part of project workflows.

Immersive Technology in this context is currently divided into two main forms: Virtual Reality and Augmented Reality. Virtual Reality (VR) produces a contained and fully immersive experience with high-quality 3D visualisations that can be explored in real time. The visualisations are usually viewed through a headset or other portable device, and navigated by head movement for orientation and by a games controller or mouse to transport the viewer through the virtual environment. The visualisations can be photo-realistic, as shown in Figure 1, and typically adopt gaming technology and real-time rendering software such as Unreal Engine, Unity and Stingray.

Figure 1 – Example of VR model of 2017 Serpentine Pavilion used for visualisation and geometry optimisation ahead of a three-week concept-to-fabrication period

Ever-improving screen resolutions for portable smartphones and tablets are also enabling VR experiences to be crafted using static 360-degree still images. Despite the obvious drawbacks compared to the completely interactive VR environment, this is a cost-effective and simple approach that is allowing VR experiences to be produced from existing 3D models and adding new benefits to project workflows.

Augmented Reality (AR) also adopts a headset, or other portable interface, and overlays additional data and information about the real world onto your field of view. The user's position is typically tracked using digital image recognition through cameras, supported by GPS and laser devices, with the portable interface presenting information to the user like a heads-up display. More recent innovations have seen tablets and mobile phones displaying 3D information over camera feeds to create an augmented view comparable to a headset experience. This information can include, for example, reinforcement layouts, the locations of building services behind walls, real-time 4D information and work instructions comparing programme status with as-built status, as well as way-finding information or machine diagnostics.

Other terms also exist, for example Mixed Reality (MR), which facilitates both VR and AR through further augmentation and interactive user interfaces.

Digital Workflows are another concept that has existed for some time but, beyond very basic spreadsheet and one-way interoperability methods, they have only been gaining traction and importance in the AEC industry in the past decade. The primary purpose of a digital workflow is to take the data created in the development of a project (geometry, material properties, design parameters, analysis results or specifications) and create a single source of common data. Through multidirectional interoperability between different software platforms, this single source can then be used for all aspects of the design through to final documentation, construction and operation, driving efficiency into the process and reducing error. Immersive Technology is just one branch of these workflows, but it is one that we see becoming very powerful for communication and future innovation in design optimisation.

 

3. Applications in Structural Engineering

As architects and engineers we work on increasingly complex and challenging projects, and traditional techniques and processes for communicating design solutions that have worked on other projects can reach their limits. During the design of the Al Wakrah FIFA World Cup Stadium in Qatar, our architects and engineers in the UK were developing new visualisation and VR tools to demonstrate the emotion of the spectator experience and the views of the match to our client in a clear visual manner (Figure 2) rather than via numerical output and statistics. We realised that an experience of this quality could be a powerful design tool for communicating ideas, not just a client visualisation aid, and began using VR as part of our design review processes.

Figure 2 – Early adoption of VR during design development of Al Wakrah Stadium

The team has since been using VR and MR on a number of platforms including the latest generation of HTC Vive headsets, which can provide a fully immersive experience while tethered to powerful computer hardware, and the mobile phone-based Samsung Gear VR which allows us to view high quality 360 degree renders and animated virtual environments from a series of fixed view-points.

Similarly, the use of MR has become increasingly prevalent thanks to the ease of operation and level of detail of both the Microsoft HoloLens and Apple's ARKit, and to the entrance into the AEC industry of talented developers from the gaming world producing sophisticated programmes that can be integrated into the team's workflow.


4. Optimising the Workflow

Our structural engineering team is using these immersive technologies as a tool that fits into a broader agenda of interoperability, intelligent data sharing and parametric design as part of an evolving interdisciplinary BIM workflow as shown in Figure 3.

Figure 3 – Workflow diagram showing direct link to immersive technologies from existing BIM workflows

Real-time rendering packages, used more commonly in gaming, can produce extremely high-quality, explorable renders from 3D models with only a limited need for post-processing. However, the highly polished, photo-real rendering needed for a fully immersive experience (Figure 1) can still take several days or even weeks to perfect for large spaces and buildings. This makes it excellent for high-quality presentations, but more cumbersome for exchanging ideas and discussing 'real-time' issues with the project team. Simplified, 'raw' geometry from modelling software such as Robert McNeel & Associates' Rhinoceros 3D (Rhino) or Autodesk Revit is often adequate for this purpose, and offers an excellent alternative to 3D printing during option studies, reducing the time spent producing physical models and allowing the model to be shared instantaneously. We have been exploring ways of setting up our models with both outputs in mind: design review and downstream visualisation.

Despite the availability of increasingly powerful hardware and graphics cards, the content and resolution of models continue to push the limits. For the largest models and highest-quality imagery we require specialist machines with state-of-the-art graphics processors, again more akin to high-end gaming and games creation than traditional modelling and visualisation.

Significant effort can still be needed to ensure that frame rates and headset orientation tracking do not cause juddering or motion sickness (especially prevalent in fully immersive VR environments), so limiting the amount of post-processing required before importing the model into the headsets has been an important part of the software development. The hardware must produce two 4K images, one per eye, each at more than 60 frames per second (FPS) to avoid motion lag; with both eyes rendered this is equivalent to more than 120 FPS.
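As a back-of-envelope check of the budget this implies (a minimal sketch of the arithmetic only, not tied to any particular headset API), the time available to draw each eye's image is under 10 ms:

```python
# Rough frame-time budget for stereo VR rendering. The 60 FPS-per-eye
# figure comes from the text above; the rest is simple arithmetic.

PER_EYE_FPS = 60                   # minimum rate per eye to avoid motion lag
EFFECTIVE_FPS = PER_EYE_FPS * 2    # two images per refresh: "equivalent to 120 FPS"

budget_per_image_ms = 1000.0 / EFFECTIVE_FPS
print(f"Effective rate: {EFFECTIVE_FPS} FPS")
print(f"Render budget per eye image: {budget_per_image_ms:.1f} ms")  # ~8.3 ms
```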

Optimising the model geometry assets, for example by reducing the number of vertices and surfaces, is especially critical when using portable devices such as the HoloLens, which by their nature contain less computing power than the larger desktop hardware used for VR. Their current limitations are still largely governed by battery technology and heat generation.

Different software produces meshed geometry in a variety of ways using different algorithms. Autodesk Revit is now commonly used for project documentation, but its method of defining meshes can result in highly visible joint lines and unresolved meshing at junctions, as well as very large and inefficient file sizes. One way to avoid this problem has been to use the data-hub concept [3]: a container based in Rhino and Grasshopper visual scripting that holds all of the geometrical data and other structural design parameters, which can be moved in multiple directions between analysis and documentation software via a number of bespoke plug-ins.
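A minimal sketch of the data-hub idea is shown below. The class and method names are our illustrative assumptions for this paper, not the production plug-ins described in [3]:

```python
# Illustrative data-hub container (assumed names, not the plug-ins of
# [3]): a single source of geometry and design data that both analysis
# and visualisation exporters read from.

from dataclasses import dataclass, field

@dataclass
class Element:
    element_id: str
    vertices: list            # [(x, y, z), ...] mesh vertices
    faces: list               # [(i, j, k), ...] vertex-index triples
    layer: str                # drives material/texture mapping downstream
    section: str = ""         # structural design parameter
    utilisation: float = 0.0  # analysis result carried with the geometry

@dataclass
class DataHub:
    elements: dict = field(default_factory=dict)

    def add(self, element: Element) -> None:
        self.elements[element.element_id] = element

    def to_analysis(self) -> list:
        # One direction of the multidirectional exchange: design data out.
        return [(e.element_id, e.section) for e in self.elements.values()]

    def to_visualisation(self) -> list:
        # Simplified geometry plus layer names for the VR/AR pipeline.
        return [(e.vertices, e.faces, e.layer) for e in self.elements.values()]
```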

This same data-hub has been used to quickly rebuild geometry for import into the VR and AR engines. In some cases the geometry is rebuilt by replacing Revit components (families or worksets) with simplified geometry (for example circular columns). In other cases scripts are used to rationalise meshes and reduce the polygon count whilst maintaining the basic original geometry, quickly creating a very close approximation of the target geometry that is usually more than adequate for communicating the design on the VR and AR hardware. The data-hub also contains a layering system with different textures and colours for the various materials, which are mapped across to the rendered model. At present, using Rhino to develop the original source geometry with these goals in mind significantly reduces the amount of post-processing required.
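A hedged sketch of the mesh-rationalisation step, written as it might appear in a GhPython component, follows. It assumes RhinoCommon is available in the scripting environment, and the 25% target ratio is an arbitrary illustrative choice rather than a value from our workflow:

```python
# Sketch of mesh rationalisation inside Rhino/Grasshopper (GhPython).
# Assumes RhinoCommon is available; target_ratio is illustrative only.

import Rhino.Geometry as rg

def rationalise(mesh, target_ratio=0.25):
    """Return a copy of `mesh` with roughly target_ratio of its faces."""
    reduced = mesh.DuplicateMesh()
    target_faces = max(4, int(reduced.Faces.Count * target_ratio))
    # Reduce(desiredPolygonCount, allowDistortion, accuracy 1-10, normalizeSize)
    reduced.Reduce(target_faces, False, 10, True)
    return reduced

# `meshes` would be the GhPython component's input; `simplified` its output.
simplified = [rationalise(m) for m in meshes]
```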

The main file format used for transferring data is .fbx, which is native to Autodesk MotionBuilder. Each file contains data broken down into meshes, faces, vertices, normals, UV coordinates and shaders (defining colour and texture), with UV channels for light mapping and texture on each surface.

We have used a further optimisation process called static batching, typically carried out in Grasshopper or in the immersive technology interface software. This is the process of grouping meshes and shaders to reduce the number sent between the CPU and the GPU of the computer to draw the images on the screen. Batching reduces the number of data exchanges (draw calls) required and hence greatly speeds up rendering.
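The sketch below illustrates the grouping idea under the same GhPython assumptions as above: meshes sharing a material are appended into a single mesh so that each material results in one exchange rather than one per object.

```python
# Conceptual static batching in GhPython: merge all meshes that share a
# material into one mesh per material, cutting the number of draw calls.
# The function name and inputs are illustrative assumptions.

from collections import defaultdict
import Rhino.Geometry as rg

def batch_by_material(meshes, materials):
    """meshes: list of rg.Mesh; materials: parallel list of material names."""
    batches = defaultdict(rg.Mesh)
    for mesh, material in zip(meshes, materials):
        batches[material].Append(mesh)  # merge into the per-material batch
    return dict(batches)  # one combined mesh per shader/material
```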

Other developments have included performance optimisation by selectively 'hiding' and buffering objects that are out of view, or by simplifying complex geometry that is too distant to perceive in detail. These methods, including frustum culling and occlusion culling, generally require the meshes to be separated into more objects to allow systematic culling of invisible ones. This can conflict with the concept of static batching, requiring a balance of approaches found through experience and through trial and error. It is also evident that there is a tension between the current tendency to load more and more data into our models as part of evolving BIM standards and protocols, and the need to minimise the data being transferred to optimise graphics performance.
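As a plain-Python illustration of the frustum-culling test (conceptual only, independent of any engine's API): an object whose bounding sphere lies entirely outside any one of the view-frustum planes cannot be visible and can be skipped.

```python
# Conceptual frustum culling: cull an object if its bounding sphere lies
# entirely outside any frustum plane. Planes are (normal, d) for the
# equation n.x + d = 0, with normals pointing into the frustum.

def outside_plane(centre, radius, normal, d):
    nx, ny, nz = normal
    cx, cy, cz = centre
    signed_distance = nx * cx + ny * cy + nz * cz + d
    return signed_distance < -radius  # whole sphere behind the plane

def is_culled(centre, radius, frustum_planes):
    return any(outside_plane(centre, radius, n, d) for n, d in frustum_planes)

# Unit sphere at the origin against a single plane x = 2 whose inward
# normal points along -x: the sphere is inside, so it is not culled.
print(is_culled((0.0, 0.0, 0.0), 1.0, [((-1.0, 0.0, 0.0), 2.0)]))  # False
```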

As with all new technology, the field is developing extremely quickly and no one piece of hardware or software can carry out all functions. We have therefore been continuously trialling the available software and developing a series of plug-ins to improve connectivity between the various platforms. These plug-ins are additional software programmes, developed outside the core packages, that enable interoperability by extracting and manipulating the relevant data in common formats and nomenclature.

5. Drawing on Gaming Industry Experience

A more recent development that has had a significant impact upon our immersive technology workflows has been the introduction of Umbra to the AEC industry. Their move to cloud-based MR processing has altered our workflows and expanded the boundaries of the technology. As noted in Figure 3, Umbra has created a process that allows building models (from Revit, ArchiCAD, Navisworks or point-cloud data) to be optimised in the cloud for MR viewing. They have used their years of experience in the gaming industry to produce an optimisation process ("Umbrafying") whereby a reconstruction of the input 3D model is generated in, and streamed from, the cloud with an adaptive level of detail. The level of detail of a model increases or decreases depending on the user's view, and the hardware resources required to produce and stream the model are distributed accordingly [4].
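A toy sketch of the adaptive level-of-detail idea follows. This is our conceptual illustration only, not Umbra's actual algorithm or API: the resolution streamed for a chunk of the model falls as the viewer's distance from it grows.

```python
# Toy distance-based LOD selection (conceptual; not Umbra's algorithm).
# LOD 0 is the finest representation; the thresholds in metres are
# illustrative assumptions.

def select_lod(distance_m, thresholds=(5.0, 20.0, 80.0)):
    """Return 0 (finest) .. len(thresholds) (coarsest) for a model chunk."""
    for lod, limit in enumerate(thresholds):
        if distance_m <= limit:
            return lod
    return len(thresholds)

for d in (2.0, 15.0, 50.0, 200.0):
    print(f"viewer at {d:5.1f} m -> stream LOD {select_lod(d)}")
```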

Umbra's approach has allowed us to divert our own resources away from model optimisation, speeding up the process of MR model creation so we can focus our efforts on communicating and reviewing our engineering designs. Figure 4 shows an MR model of the complete Al Wakrah Stadium structure (also seen in VR in Figure 2) that was available for streaming approximately 30 minutes after activation of the Revit Umbra add-in.

Figure 4 – Umbra model of Al Wakrah Stadium structure produced in Autodesk Revit, streamed to Apple iPhone ARKit

Umbra has also focused on producing a cloud streaming service that is adaptable to multiple platforms. We have successfully used Umbra on ARKit and the Microsoft HoloLens by Umbrafying our structural models within Revit. It can also be accessed via WebGL, and it is possible to optimise Unity assets and deploy them into the Unity application, offering a degree of open interoperability. Whilst the resolution and immersion of this solution (Figure 4) are not as high as those produced for the Oculus or HTC Vive (Figures 1 and 2), it takes a fraction of the time and resources to produce, and as a means of communicating an evolving design or engaging in the review of a construction detail prior to fabrication it is a very valuable tool.


6. Limitations – Practical and Technological

Whilst immersive technology is adding a new dimension to the way the AEC industry models, communicates and reviews designs, the fully immersive, multi-user collaborative utopia remains elusive. There are limitations to what can currently be achieved, but there are also areas ripe for further development, for example as the increasingly complex geometries afforded by additive construction become more commonplace.

Sketching in 3D

One of the common criticisms of 3D modelling is that it is very clinical. It is difficult to sketch and portray simple, creative concepts with digital models which, by definition, require a large degree of mathematical definition and can be time-consuming to construct. Whilst the latest generation of VR headsets adopt wall-mounted laser tracking systems to improve the stability, accuracy and smoothness of the display, and use voice commands and a pair of hand-held 'wands' as a user interface, they still lack the precision and intuitiveness of 2D and 3D modelling with pen, mouse and keyboard. It is possible to use software applications such as Tilt Brush to sketch and create images by hand in 3D; however, in its current form its benefits are constrained because the ability to truly collaborate in a virtual environment remains limited.

Collaborating in a Virtual Environment

Communication between multiple users is typically via a combination of headsets with laser-sighted highlighting, basic avatars and remote viewers using WebGL on traditional monitors. Manipulating the model in a live environment can be simulated for simple, prescribed and rehearsed tasks, but in general making the updates in a 2D environment and then re-uploading the model is much more efficient, owing to limitations of the user interface and processing speeds. For a design review, individuals therefore essentially 'mark up' the model and exchange screenshots. Whilst this is not yet comparable with collaborative review tools such as Autodesk's BIM360 and Nemetschek's BIM+, it is only a matter of time before interoperability with these platforms is created, unlocking greater application for the immersive environment. At that point the challenges of 'tagging' a virtual model using AR and voice commands, and of manipulating components in a 'live' model using hand gestures beyond simple pan and zoom functions, become far more tractable.

Even with the current limitations there are benefits to be gained from shared immersive experiences, for example our use of the HoloLens to review the complex structures of the 2016 Serpentine Gallery Summer Houses and the 2017 Serpentine Pavilion, both engineered by AECOM. The curved steel frame of the Barkow Leibinger Summer House, contained inside a thin plywood stressed skin, proved almost impossible to review and check on-screen. A multi-user AR model review using the Trimble Connect interface (Figure 5) allowed the shop model to be reviewed at various scales, including 1:1 checks of access to the bolted splices. The base plates for the sinuous timber walls of the 2017 Pavilion were reviewed in a similar way, as have key connection details for the Hong Kong Airport Terminal 2 and Third Runway Concourse developments (Figure 6). We have also successfully deployed VR in the HTC Vive for numerous design reviews, including arranging the complex timber and steel lattice of the 2017 Serpentine Pavilion with architect Francis Kéré, who gained the benefit of a human-scale perspective (Figure 1).


Figure 5 – HoloLens design review of Barkow Leibinger Summer House
Figure 6 – HoloLens design review of airport roof cast column base

 

7. An Ever Changing Landscape of Advancement

In a similar manner to the design data-hub [3], which we produced with specific software in mind but in an open format that could adapt to technological change, our work with Umbra reflects a philosophy of focusing on optimising the data rather than on any specific piece of hardware, allowing us to capitalise on the benefits of rapid technological advancement.

There are various levels of multi-user immersive experience in development that we are seeking to improve. In ascending order of complexity and hardware/software resource burden these include:

  • Remote viewing – multiple users remotely view the camera feed of the immersive device to engage in the review.
  • Fixed model anchoring – the virtual model is fixed to a specific real-world location, with each user interacting within the same space.
  • Presentation viewing – multiple users are able to view the same model, but only the designated presenter can control and interact with it.
  • External AR model review – placing the 1:1 model on site using the fixed model anchoring principle. Current limitations include the accuracy of geolocation and image brightness, meaning our successful trials have only been from fixed points on site at dusk.
  • Remote multi-user review – multiple users interact with a model from remote locations, building upon the principles of presentation viewing and, with avatars, fixed model anchoring.

Although we have had successful trials of all of the above scenarios, the realisation of these developments depends on hardware advances and software demand. They have the potential, very soon, to expand the use of Immersive Technology in AEC beyond its existing communication benefits into an integral tool for review, construction and manufacture.


8. Conclusion

As the level of functionality develops, the potential for this technology is clearly huge. But, as these examples have illustrated, it has already moved beyond a novelty and marketing talking point to something that is revolutionising the way we communicate.

As with all of these tools, from parametric design to software interoperability and 'big data' BIM, they are still a means to an end. Ultimately they are making us, as engineers and designers, more efficient at communicating and at producing our deliverables. However, the future seems clear to us. Embracing these new technologies is moving our industry forward in leaps and bounds, not only in how we design and communicate, but in how those designs are built, operated, maintained and experienced by the user.

 

References

[1] Harries D., The New Media Book, British Film Institute, 2002.

[2] "Facebook to buy Oculus virtual reality firm for $2B", Associated Press, March 25, 2014. Retrieved March 27, 2014.

[3] Leach J., Nicolin R. and Burton M., "Structural workflows and the art of parametric design", The Structural Engineer, March 2016.

[4] Bushnaif J., "The Umbra Workflow", https://blog.umbra3d.com/blog/the-umbra-workflow?, March 13, 2018.

[5] Leach J. and Fowler M., "Immersive Technology – a transformation of the workplace?", The Structural Engineer, September 2016.

Utilised software: McNeel Rhino, Autodesk Revit, BIM360, Umbra, Nemetschek BIM+, Unity.
