Introduction

It’s an exciting time for our industry. Whereas construction has typically lagged behind other industries in its adoption of technology, there is currently an amazing buzz around computational design and BIM.

Dynamo is one of the most successful and accessible programs for AEC professionals – but what exactly can you do with Dynamo? How far can you take the tool beyond everyday project work?


With this question in mind, our team (made up of architects with a big passion for everything computational) approached the first UK Dynamo User Group’s Dynamo + Generative Design Hackathon. We wanted to push the limits and do something completely out of context – a proof of concept of the scalability and flexibility of the tools we use and love. We can do more!


Our big idea was to attempt something unconventional – it had to present our group with a real technological challenge and it had to be hacky. After throwing around a few ideas, we settled on a concept: Mesh.ByFace.

This would be a series of nodes that would access a user’s webcam, take a selfie, generate a 3D .OBJ file of their face and bring the coloured mesh into Dynamo. From there, users could insert their faces into Revit – perhaps even into a real building!


We wanted to have a lot of fun at this hackathon, as reflected in our group name – The Hackstreet Boys.

With a clear objective in mind (and a killer group name) we turned up fresh-faced on a Friday morning at the hackathon location: a WeWork in Paddington.


Webcam Node

The first step of our proposed Mesh.ByFace pipeline would be to create a live webcam node to capture a user’s face. This presented us with a few technical challenges: would the Dynamo user interface support a live video feed? How would this work within the graph execution model?


In some initial pre-hackathon tests, we managed to access a laptop’s webcam using the popular AForge.NET library. With just a few lines of C#, we had a WPF window showing a live webcam feed. So far so good!
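For readers who want to experiment without C#, here is a minimal sketch of the same single-frame capture in Python using OpenCV – an analogous approach for illustration, not the AForge.NET code we actually wrote, and the output file name is a placeholder:

    import cv2

    # Open the default webcam (device 0) and grab a single frame.
    capture = cv2.VideoCapture(0)
    success, frame = capture.read()
    if success:
        cv2.imwrite("selfie.jpg", frame)  # save the snapshot for the rest of the pipeline
    capture.release()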


The next challenge came during the first day of the hackathon. While our group has a reasonable level of experience with IronPython and C#, none of us had created a ZeroTouch node before, let alone attempted the more advanced kind of UI node built on Dynamo’s NodeModel interface.


After many hours with Dynamo’s excellent developer documentation (and some assistance from Alvaro Pickmans and the Dynamo team) we were able to place our live webcam node within Dynamo Sandbox. This was a really exciting moment – our own surprised faces staring back at us… inside of Dynamo!

Dynamo Webcam Node


Point Placement GIF & Tunes

Creating an OBJ file from a user’s selfie would take a few minutes. What would users do while waiting? We needed to build up the anticipation. Our point placement GIF displays the user’s freshly-snapped selfie (with the facial-alignment points overlaid, flashing on and off) while playing some classic Backstreet Boys tunes! Waiting? Perhaps – but waiting in style!

ML Facial Recognition
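For the curious, here is a minimal sketch of the flashing-points idea using Pillow – the file names are placeholders, and the three sample points stand in for the 68 alignment points:

    from PIL import Image, ImageDraw

    base = Image.open("selfie.jpg").convert("RGB")
    points = [(120, 80), (135, 82), (150, 85)]  # stand-ins for the 68 alignment points

    frames = []
    for show_points in (True, False):  # alternate frames: points on, points off
        frame = base.copy()
        if show_points:
            draw = ImageDraw.Draw(frame)
            for x, y in points:
                draw.ellipse((x - 3, y - 3, x + 3, y + 3), fill=(0, 255, 0))
        frames.append(frame)

    # Loop the two frames forever while the OBJ is being generated.
    frames[0].save("waiting.gif", save_all=True, append_images=frames[1:],
                   duration=400, loop=0)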


Mesh.ByFace

We had some amazing selfies, but the job wasn’t done – we needed to get them into Dynamo. This is where we dived deep into the seemingly infinite depths of GitHub to find out how. After some research we identified our candidates:

First, we would use the “2D and 3D Face alignment library build using pytorch” provided by 1adrianb to extract 68 landmark points from the picture and save them to an iBug .pts file.

Face Point Extraction
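Roughly, the extraction step looks like this – a sketch based on the library’s documented usage (the LandmarksType flag has been renamed across versions, and the file names are placeholders):

    import face_alignment
    from skimage import io

    # Build a 2D landmark detector (runs on CPU here; 'cuda' also works).
    fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device='cpu')

    image = io.imread("selfie.jpg")
    landmarks = fa.get_landmarks(image)[0]  # 68 (x, y) points for the first face found

    # Write the points out in the iBug .pts format expected downstream.
    with open("selfie.pts", "w") as f:
        f.write("version: 1\nn_points: 68\n{\n")
        for x, y in landmarks:
            f.write("%f %f\n" % (x, y))
        f.write("}\n")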


This .pts file would then be fed to “a lightweight 3D Morphable Face Model fitting library” from patrikhuber. Basically, we would use the points to morph a neutral mesh until it looks like the face in our original picture.

Neutral Mesh
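Condensed from the Python demo in the eos repository, the fitting step looks roughly like this – exact signatures vary between releases, the share/ paths refer to the data files bundled with the repo, and read_pts is a small helper that parses the .pts file:

    import eos

    def read_pts(filename):
        # Parse an iBug .pts file into eos Landmark objects with ids '1'..'68'.
        lines = open(filename).read().splitlines()[3:71]
        return [eos.core.Landmark(str(i + 1), [float(c) for c in line.split()])
                for i, line in enumerate(lines)]

    landmarks = read_pts("selfie.pts")
    image_width, image_height = 1280, 1024  # the selfie's pixel dimensions

    # Load the Surrey Face Model and its expression blendshapes (bundled with eos).
    model = eos.morphablemodel.load_model("share/sfm_shape_3448.bin")
    blendshapes = eos.morphablemodel.load_blendshapes("share/expression_blendshapes_3448.bin")
    model_with_expressions = eos.morphablemodel.MorphableModel(
        model.get_shape_model(), blendshapes,
        color_model=eos.morphablemodel.PcaModel(),
        vertex_definitions=None,
        texture_coordinates=model.get_texture_coordinates())
    landmark_mapper = eos.core.LandmarkMapper("share/ibug_to_sfm.txt")
    edge_topology = eos.morphablemodel.load_edge_topology("share/sfm_3448_edge_topology.json")
    contour_landmarks = eos.fitting.ContourLandmarks.load("share/ibug_to_sfm.txt")
    model_contour = eos.fitting.ModelContour.load("share/sfm_model_contours.json")

    # Morph the neutral model so its projection lines up with the 68 image points.
    (mesh, pose, shape_coeffs, blendshape_coeffs) = eos.fitting.fit_shape_and_pose(
        model_with_expressions, landmarks, landmark_mapper,
        image_width, image_height, edge_topology, contour_landmarks, model_contour)

    eos.core.write_obj(mesh, "face.obj")  # the mesh we will pull into Dynamo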


Good, we had our libraries – time to use them. Both repos come with Python 3 bindings but no IronPython support.

First we built our script using the available methods, combining the two libraries into one streamlined piece of code that takes the picture, generates the points, morphs the mesh and then writes it out to an OBJ file.

Thankfully, some examples on GitHub helped us, but we had to figure out many of the missing bits ourselves.

Eventually we got the code working – you can find it here: GenerateMesh.py

We still had to make it work from within Dynamo, and with no IronPython support we had to think differently.

The Python code was run in its own process from within a C# ZeroTouch node: Zero Touch Node run Python 3

Alternatively, if you are not confident with C#, just run a .bat file (.bat code) that launches the script from within a Dynamo Python node. That’s how we tested it at the beginning: Dynamo run Python 3
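Stripped to its essentials, that Python-node variant is just a process launch. A minimal sketch for an IronPython 2.7 Dynamo Python Script node – the interpreter and script paths are placeholders:

    # Inside a Dynamo Python Script node (IronPython 2.7).
    import subprocess

    python_exe = r"C:\Python36\python.exe"         # placeholder: a CPython 3 install
    script = r"C:\HackstreetBoys\GenerateMesh.py"  # placeholder: path to the script

    # Run the CPython 3 script in its own process and wait for it to finish,
    # so the OBJ file exists before downstream nodes try to read it.
    process = subprocess.Popen([python_exe, script],
                               stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    stdout, stderr = process.communicate()

    OUT = stdout  # surface the script's output in Dynamo for debugging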

Facial Feature Recognition


Using Mesh Toolkit, we brought the OBJ into Dynamo.

Mesh ToolKit Surfaces and Face Points


The mesh looked good, but it was missing its colours. To add that extra bit of Uncanny Valley feeling, we proceeded to colour it in Dynamo.

Here we used the previously generated 68 facial alignment points to map the picture, then rotated and scaled the mesh to match where it falls in the image. The next step was extracting the pixel colours and matching them to the centre point of each mesh triangle.

Image Map for Surface Colourization
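The sampling itself boils down to a few lines. A simplified sketch with Pillow – the hard-coded triangle stands in for the real mesh triangles, already transformed into image coordinates:

    from PIL import Image

    img = Image.open("selfie.jpg").convert("RGB")

    # Triangles projected into image space; one stand-in triangle here.
    triangles = [((120.0, 80.0), (140.0, 95.0), (125.0, 110.0))]

    def triangle_colour(tri):
        # Sample the pixel under the triangle's centroid, clamped to the image.
        cx = sum(p[0] for p in tri) / 3.0
        cy = sum(p[1] for p in tri) / 3.0
        cx = min(max(int(cx), 0), img.width - 1)
        cy = min(max(int(cy), 0), img.height - 1)
        return img.getpixel((cx, cy))  # an (r, g, b) tuple

    colours = [triangle_colour(t) for t in triangles]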


Finally we get our results:

Results: The many faces of the Hackstreet Boys


Live Tweeting Node

Could we possibly complete our hack without tweeting about it? Of course not! The world had to see our faces!

Our live tweeting node uses the Python library Tweepy and a Twitter Developer application to obtain the necessary authorisation credentials. Our Python script uses the keys and tokens provided by the app to tweet both images and a status onto the Hackstreet Boys Twitter page, directly from Dynamo. Want to see some BIM selfies? Check this out: https://twitter.com/HackstreetB
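Under the hood it comes down to a handful of Tweepy calls. A minimal sketch against the Tweepy 3.x-era API – the keys, tokens, file name and status text are placeholders:

    import tweepy

    # Credentials issued by the Twitter Developer application (placeholders).
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
    api = tweepy.API(auth)

    # Tweet a freshly rendered face together with a status update.
    api.update_with_media("dynamo_face.png", status="Straight outta Dynamo! #MeshByFace")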


Conclusion / Outro

After 1.5 days of mad coding, research and prototyping, Mesh.ByFace was demonstrated in front of a live audience at WeWork Paddington. Getting to a fully-functional prototype came down to the eleventh hour, but our exceptional team managed to pull everything together. Our demo was a great success: we won the prize for most fun team and placed third in the hackathon overall.


The project’s source code can be found on the Hackstreet Boys GitHub page.


Team Bios


Ben Robinson is an architect and computational BIM designer at Hawkins\Brown. Ben works within Hawkins\Brown’s computational design team, HB\Technologies, where he designs new digital tools that utilise the Revit API, Dynamo and Python, with a strong focus on workflow automation, productivity and GUI development.

Twitter: @Robinson00Ben


Mauro Sabiu is currently a Senior Architect at Zaha Hadid Architects. Specialising in BIM and computational design, he works on everything from the automation of design workflows and computational modelling to Dynamo staff training and the supervision of the BIM workflow for several international projects.

Twitter: @sabiu_mauro


Mikael Santrolli is a BIM & Design Systems Coordinator at Foster + Partners. He is proficient in BIM workflows and an expert modeller specialising in parametric software, computational design and the automation of processes.

Twitter: @m_santrolli


Oliver Green is the Computational Design Specialist at Allford Hall Monaghan Morris. He is an expert in data management, process automation, BIM and the Revit API, having previously worked as an architect. He has an intimate knowledge of architectural workflows and building construction, and experience developing custom process automation tools using Dynamo, Python and C#.

Twitter: @Oliver_E_Green