See part 4 here
Now that we have covered the setup needed to perform and parse a single render using the cloud rendering service in the previous blog posts, we can start queuing multiple renderings of a Revit model as we alter parameters on it.
This graph and the custom nodes necessary to run it will be hosted in the samples and the package manager, but watch out: even with the custom nodes you'll need to get a weather file to parse (or enter the values manually). The sample includes a weather data file for parsing, but it is for a specific site, so you'll need to get one from Green Building Studio or another source for your own site.
The first thing to go over is a high-level overview of what we want to accomplish. We need some way of altering a number of parameters on some family instances and then taking a render. Then we need to do something with the data we get back from the render: for this proof of concept, we'll save an image of the render and then display the daylighting information back onto the Revit model using the analysis visualization framework, via a custom node from post 3. Finally, we need to be able to repeat this series of four operations over and over: change parameters, render, save image, display data.
If you look at the main graph below you can see there are four custom nodes (functions) linked together with a series of Compose nodes. These four nodes do the majority of the work in this graph. I'll go over their behavior in depth, but for now the important thing is to think about the different operations we need to perform. Even without understanding how these nodes operate in detail, we can see the high-level overview.
The UpdateParameter(s) node takes a sequence of values. Let's not worry yet about what they are or how that data is structured; simply put, it's some families and some parameters on those families to change. The Compose node takes the output of one function and passes it to another, like this: f(x) = print(updateParameter(x)). If we call the function f, it now calls two other functions: first it calls updateParameter with x passed in, then the output of updateParameter is passed to print(). What comes out of the Compose node is another function, the combination of both custom nodes into one new node.
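The Compose node behaves like ordinary function composition. Here's a minimal Python sketch of the idea; the function names are illustrative stand-ins, not the actual node internals:

```python
def compose(f, g):
    """Return a new function that applies g first, then passes its result to f."""
    return lambda x: f(g(x))

def update_parameter(x):
    # stand-in for the UpdateParameter(s) custom node; just passes data along
    return x + 1

def report(x):
    # stand-in for a logging/print step; returns its input so the chain continues
    return x * 2

f = compose(report, update_parameter)
result = f(1)  # update_parameter runs first (1 -> 2), then report (2 -> 4)
```

Each additional Compose just wraps the chain in one more call, which is why adding a new stage later doesn't disturb the existing ones.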
So the output of UpdateParameter(s) is passed into the input of RenderWithWeatherData_Return List of data. In this case we just pass the iteration number so we can keep track of how many renderings we have done. The next Compose and custom node act exactly the same way: the output from the render function is passed to the input of GetDataFromRender_WriteImage.
The point of these operations is that we can map these four composed functions over one sequence, which is input into the first node, UpdateParameters. Because we can continue to compose more functions onto the chain, it's easy to restructure our graph and add more functionality without changing the other functions. For that to work you need to plan ahead a bit; I'll describe the idea in greater detail when looking at the functionality of these nodes, but we'll have to keep passing all the data about our rendering down the chain, because a function further down might need some of that information to execute correctly.
Before diving into the functionality of these custom nodes that actually do the updating, rendering, image saving, and data visualization, let’s take a look at the sequence of data we are using to drive the renderings.
Below you can see we use the Cartesian Product and Lace Longest combinators to create list pairs of a string parameter name and a value. You can follow the logic of this list building in the watch nodes. The way we build these lists controls which parameters get set during each render iteration. The goal is a final list that looks like this:
[family instance, [[width, 10], [height, 10], ...other param/value pairs as we want to change], iteration number]

or, more generally:

[family instance, [list of param value pairs], iteration number]
We have one of these sublists per render; each render updates one family instance, changes some number of parameters, and knows which render it is out of the list.
The output of the above graph is a list of sublists. Each sublist is a render setting: it contains the pairs of param data [name, value]. See the last watch node on the right. You can think of these as the settings we are going to apply to each render. We have 12 family instances in our model, so we'll end up with 24 renderings, one for each of the two settings for each of the 12 instances.
The next step is to combine each sublist (render list) with the family instances. We use the Cartesian Product and List combinator again, which produces the almost-final list. The last thing left to do is add the iteration number to the end of each render list, which we do with some quick python in the Add Index Number Enumerate node.
We iterate over each list, grab the index of that iteration, and add it to the end of the list. If anyone is wondering, we use .Add() and not .append() here because this is really a .NET collection, not a python list.
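In the graph this list building is done with the Cartesian Product, Lace Longest, and Add Index Number Enumerate nodes, but the same structure can be sketched in plain Python. The instance names, parameter names, and values below are placeholders for illustration:

```python
family_instances = ["window_A", "window_B"]   # placeholders for real FamilyInstance objects
param_names = ["width", "height"]
settings = [[10, 10], [20, 20]]               # two parameter settings to try per instance

# one render list per (instance, setting) pair, like the Cartesian Product node
render_lists = []
for instance in family_instances:
    for values in settings:
        pairs = [list(p) for p in zip(param_names, values)]
        render_lists.append([instance, pairs])

# append the iteration number to the end of each render list,
# as the Add Index Number Enumerate python node does with .Add()
for i, render in enumerate(render_lists):
    render.append(i)
```

With 2 instances and 2 settings this yields 4 render lists; with the 12 instances and 2 settings in the sample model, you'd get the 24 described above.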
All of this list management and lacing work can be simplified in a few ways. We could create some custom nodes that make generating these lists of renderings simpler by taking some ranges of values, and number of increments and generating the lists for us in a cleaner way. This is something I’ll explore in a future post.
Another option is to use something like a python dictionary or a dictionary object. Check out something like the Named Collections package, which lets you store data within a list of lists but access it by a key value instead of an index; the key can be a string like "params" or "height". It's just another tool for organizing and keeping track of large sets of data like this.
For now, we’ll stick with the list structure we have.
Now we have a complete sequence of data to call our rendering custom nodes on.
The UpdateParameter(s) node is the first to get called when we map; it's the inner function. Since we are mapping over the sublists inside the list we just built, each time this node is called it deals with only one of those sublists. So the input looks like
[family instance, [list of param value pairs], iteration number]
The same as before, but note that because map applies the function to each item in the sequence, our input is this sublist only, not the entire list of renderings. On each call, map moves on to the next sublist.
The first thing we do is get the first of list and the last of list; that's the family instance to update and the iteration number, respectively. The next thing is to get the pairs of parameters.
The code in the python node is a bit superfluous but it can be used to deal with less structured input data.
If the pairs of [param name, value] were not stored in a nested list, we would need some way of identifying them in the incoming list. For instance, if our input list were [familyinstance, [param1,value1], [param2,value2], iteration number], with no nesting, how would we know how many pairs of parameters to look for? We couldn't know which indices to grab to get them all.
We can use the hasattr() function in python to check if each item in the list is iterable, meaning it supports iteration; it's one way to check whether something is list-like. We then keep track of the iterable items and store them in a list.
This is superfluous for us because our pairs are already in an outer list, so afterwards we just flatten the list by one nesting level with the Flatten node.
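A minimal sketch of that iterability check might look like this; the input list and function name are illustrative, not the exact node code:

```python
def collect_iterables(items):
    """Keep only the list-like items in a flat input list.

    hasattr(x, "__iter__") is one way to check for iterability; note that
    python 3 strings are also iterable, so we exclude them explicitly.
    """
    found = []
    for item in items:
        if hasattr(item, "__iter__") and not isinstance(item, str):
            found.append(item)
    return found

flat_input = ["familyinstance", ["width", 10], ["height", 20], 3]
pairs = collect_iterables(flat_input)  # just the [name, value] sublists
```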
The next step is to map Last of List and First of List over our list of pairs, unzipping the pairs into two separate lists, for example [height, width] and [10, 20]. Then we use Lace Longest with Set Family Instance Parameter as the combinator: our param names go into list1 and the values into list2, matching the order of arguments the Set Family Instance Parameter node expects. Lace Longest maps those arguments over multiple calls of the combinator, so we end up calling Set Family Instance Parameter as many times as we have argument pairs.
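The unzip-then-lace pattern is easy to sketch in Python; here a dict stands in for a real FamilyInstance, and the setter is a hypothetical stand-in for the actual node:

```python
pairs = [["height", 10], ["width", 20]]

names = [p[0] for p in pairs]    # First of List mapped over the pairs
values = [p[-1] for p in pairs]  # Last of List mapped over the pairs

def set_family_instance_parameter(instance, name, value):
    # stand-in for the Set Family Instance Parameter node,
    # using a dict in place of a real FamilyInstance
    instance[name] = value
    return instance

instance = {}
# Lace Longest over two equal-length lists is just pairwise iteration
for name, value in zip(names, values):
    set_family_instance_parameter(instance, name, value)
```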
We also wrap the Set Family Instance Parameter call in a transaction so it is sure to happen before the rendering takes place. We finally exit this custom node, simply passing the iteration number through to an output.
The next section of the graph to execute is the Render with weather data_Return list of data node, catchy name, I know.
First, let's jump into Parse Weather Data. Note that we need to supply a path to a weather data file in a specific format; the one I'm parsing here is a .CSV weather file from Green Building Studio. Check this link
and scroll down to Getting DNI & DHI settings from GBS Weather Files. This shows how to use Revit and Green Building Studio to get access to a weather file; download this file as a CSV and specify the path to it inside the Render node.
I'm using some python to find the current dynamo graph we're executing and then find the sample weather data file relative to it. This gets passed in as a file path to the custom node, so it's the same each time we render.
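The relative-path part is plain os.path work. In the sketch below, graph_path stands in for the workspace path the real python node obtains from Dynamo at run time, and the weather file name is hypothetical:

```python
import os

# 'graph_path' stands in for the current workspace's file path, which the
# real graph obtains from Dynamo at run time; the value here is illustrative
graph_path = "C:/work/daylighting/renderLoop.dyn"

# the weather file name is hypothetical; the sample ships its own file
weather_file = os.path.join(os.path.dirname(graph_path), "weatherData.csv")
```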
We also input a dateTimeObject, which specifies a date and time of day.
Inside the Parse Weather Data node we use some python to quickly parse the file.
This python code creates a dictionary for each row of the csv, then formats the date and time object so we can look up the data we need by year, month, and hour. Since the file now also has a timezone offset field, we need to reverse the offset from UTC back to our local time to look up the appropriate time data in the weather file.
We can't use the automatic csv parsing from python's CSV library directly because it expects one header row, but this file has two.
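The two-header-row workaround can be sketched like this. The sample data and column names below are illustrative only; the real GBS file has more columns, and the timezone offset handling is omitted here:

```python
import csv

# a tiny stand-in for a GBS weather file: two header rows, then data rows
raw = (
    "Green Building Studio Weather File\n"
    "Month,Day,Hour,DNI,DHI\n"
    "6,21,12,850,120\n"
    "6,21,13,830,130\n"
)

lines = raw.splitlines()
header = lines[1].split(",")  # the second row holds the column names
# build one dictionary per data row, keyed by the column names
rows = [dict(zip(header, r)) for r in csv.reader(lines[2:])]

def lookup(month, day, hour):
    """Find the row matching a date/time (UTC-offset handling omitted)."""
    for row in rows:
        if (row["Month"], row["Day"], row["Hour"]) == (str(month), str(day), str(hour)):
            return row
    return None
```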
In the rest of the node we just grab the data from the different indices and output it, passing it to the next part of the render function we're in: Create SkyModel and Environment.
We pass in the data we just looked up in the weather file, along with a latitude, a longitude, and the same date and time object we used before, which here is used to set the render to that date. This node outputs a daylighting environment.
The next node is Create Cloud Daylight Job and Send. It takes a 3d view name, the resolution of sampling positions, 3 points defining a plane, and the environment object we just generated in the last node. We send the cloud render job in this node as well, and return the result.
Back in the render function, we parse the data from the returned render file, wrap this data into a list, and return it from the render function.
This list is passed to the next function in the composed function chain: Get Data and Write Render. We grab the bitmap from the list, and the iteration number, and use that as part of the file name for the image we're going to write out. We combine this with a path; this can be wherever you want, but you need to make sure the folder already exists or it will return an error when trying to save the image.
Like before, when looking for the weather data file, I use some python to find the current dynamo graph's home workspace file path and use that path to save our images to, since we know this path has to exist!
This is passed in as the file path input below; it's the same for each run, so all the images are saved inside this folder.
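The path handling can be sketched like this; the helper name and file-name pattern are hypothetical, and creating the folder up front is one way to avoid the save error mentioned above:

```python
import os

def image_path_for(output_dir, iteration):
    """Build a save path for one render; the folder must exist before saving."""
    if not os.path.isdir(output_dir):
        os.makedirs(output_dir)  # create it up front to avoid the save error
    return os.path.join(output_dir, "render_{0}.png".format(iteration))
```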
Writing the image to disk requires a transaction.
We'll get a series of 24 renderings with the parameter changes we set. Note that I modified a window family in the .rvt file included with the sample to use a window where width and height were instance parameters. If you want to change type parameters, you'll need to make sure your logic is correct in the UpdateParameter(s) node.
The final function that gets called is displayDataCall, which internally uses another custom node we made in post 3, Display Daylighting Data on Surface, to draw the daylighting information on the floor face of the sample room. Check out post 3 if you want to see how that node works internally, but note that I moved the transaction into this outer custom node so that it could be mapped correctly.
Each time a render occurs, the Revit model will be updated before the render, when a window changes size, and again afterwards, when the new daylighting data is displayed on the floor.
Note that the images you save out and the displayed data will appear different in intensity because they are clamped/normalized differently: the analysis style in the Revit file is set to use Min/Max, while you can clamp the max values of the saved image so the two can be compared more easily.
And that is the last node executed before the map function grabs the next set of values and calls these composed functions again.