See part 3 here

See part 5 here

One workflow that is a bit tricky to get started with in Dynamo is iterating on the output of a function with the same function.  We could stack a chain of these nodes together (I’m using the terms function and node interchangeably here), feeding the output from one node into the input of the next.  That would work, but what if we don’t know how many times we need to call the function and pass along its output?

It’s not actually difficult, but it takes a change of perspective if you haven’t heard of recursion before.  A recursive function is one that calls itself.  Sometimes another option is the reduce node (which applies a function over the items in a list as well as to the output of that function), but recursion is more general purpose.
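To make that comparison concrete, here is what the reduce pattern looks like outside of Dynamo, sketched in Python with the standard `functools.reduce`:

```python
from functools import reduce

# reduce feeds each list item, together with the running result,
# back into the same function -- a built-in form of "iterate on
# the output of a function with that same function"
total = reduce(lambda running, item: running + item, [1, 2, 3, 4], 0)
print(total)  # 10
```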

Recursion, allows us to embed a node within itself.  This means that during the execution of that node, it will call itself again, and again, and again…

This will go on infinitely, until we run out of memory, or until we reach our base case!  The base case is the point at which the recursion stops, and in Dynamo it is when we start returning all the results from each recursive call to our recursive function.
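Before getting to the Dynamo examples, here is a minimal sketch of the same idea in Python: the general case calls the function again, and the base case is where the calls stop and results start flowing back.

```python
def countdown(n):
    # Base case: stop recursing and start returning results.
    if n <= 0:
        return [0]
    # General case: the function calls itself on smaller input;
    # the full result is built up as the calls return.
    return [n] + countdown(n - 1)

print(countdown(3))  # [3, 2, 1, 0]
```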

A simple example of a recursive node is a sumList node: 

This is a recursive custom node that contains a call to itself.  We recursively split the input list of numbers into smaller and smaller pieces, each time calling sumList again on the smaller list.  Once we reach our base case of list length < 1, we return a 0, and then all the previous results flow back: the pieces of the list that were split off at each successive sumList call are pushed through to the add node, which is called once for each recursive call that was made.

It’s a bit of a mind stretch, but each time we recurse through the node we get deeper and deeper, one level per item in the list, and never actually reach the add node: because the IF test evaluates to true for the general case, we keep calling sumList and add never gets called.  Only when we finally reach the base case does the recursion stop, and we start adding results.
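The same logic can be sketched in Python.  Splitting the list into its first item and the rest is one way to do the split the node describes:

```python
def sum_list(numbers):
    # Base case: list length < 1, so return 0 and stop recursing.
    if len(numbers) < 1:
        return 0
    # General case: split off one item and call sum_list on the
    # smaller list; the add only happens once the calls return.
    return numbers[0] + sum_list(numbers[1:])

print(sum_list([1, 2, 3, 4]))  # 10
```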

A slightly more complex and interesting example is using recursion for a nondeterministic algorithm: we combine recursion with randomness to search for a solution to a problem.

One instance where recursion makes sense is when we want to feed the result of our function back into the function as the next input.  The graph below uses the simple search technique of hill climbing to move a point, attempting to get closer to a goal point.  This is a toy example, used to illustrate the parts of the algorithm and one way to accomplish them in Dynamo.

A high level overview of hill climbing is as follows:

1) We have some goal state we wish to reach (in our case it’s getting an XYZ to 100,100,100)

2) Generate a random solution to this problem; we call this the parent solution (in our case we generate a random XYZ point)

3) We define a way to evaluate our current solution (we just use the XYZ distance node)

4) Change the parent solution in some way to generate a new solution; call this the child solution

5) Evaluate the parent and child:

    if the child is better than the parent,
        call the child the new parent
    else
        throw away the child solution

6) Go back to step 4 and repeat; this is where the recursion comes in.
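The six steps above can be sketched as a single recursive Python function.  The names `produce_child`, `step`, and the depth cap are my additions, not part of the graph; the cap is just a guard against Python’s recursion limit:

```python
import math
import random

def produce_child(parent, goal, step=1.0, depth=500):
    """One hill-climbing round: mutate the parent, keep the better
    point, and feed the result back in as the next parent."""
    # Step 4: change the parent by a random vector to get a child.
    child = tuple(c + random.uniform(-step, step) for c in parent)
    # Step 5: keep whichever solution is closer to the goal.
    if math.dist(child, goal) < math.dist(parent, goal):
        parent = child
    # Base case: close enough (or out of budget) -- stop recursing.
    if math.dist(parent, goal) < 0.1 or depth == 0:
        return parent
    # Step 6: the output becomes the next input -- the recursion.
    return produce_child(parent, goal, step, depth - 1)
```

Because a worse child is simply thrown away, the current solution never gets farther from the goal as the recursion deepens.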

Notice below that the produce child node needs two inputs: the goal point, which never changes, and the parent solution, which changes only when it improves.  We need to call this node over and over with a changing parent input.  This is the node that triggers the recursion.

Inside the produce child node you can see we generate a new child by creating a random vector and moving the parent solution by this vector to get a new point.  We then evaluate the fitness of the two solutions by taking their distances to the goal point.  

These distances and the two points are input into another node, decide new parent, which is simply an if statement: whichever solution has the smaller distance is the output.

Then we make the recursive call to the produce child node, where the code before it runs again.

Also notice the if statement that tests the parent fitness.  This if statement tests for the base case.  If the distance to the goal is less than .1, then we end the recursion by passing a list with 0 in it (we could pass an empty list; notice that the true branch of the If statement does not lead back to the produce child node).  If the distance is not less than .1, then we make a further recursive call to produce child.

Only when the recursion finally ends will we start returning the values output by the recursive calls.  The first one will be the list with 0 in it; then the add to list node after the produce child node will be called repeatedly, adding the parent at each step.  Even though we never explicitly store our XYZs in a list, we can use the add to list node because when the recursion ends we pass a list as the output of our produce child node.  The output inside this custom node graph is the same as the output of the produce child node that we can see in this graph…
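One way to mimic that accumulation in Python is to have the base case return the starting list and have every other call add its parent to whatever the deeper call returned.  Again a sketch: the helper name is mine, and where the graph passes a list with 0 in it, I return the final parent instead to keep the sketch simple.

```python
import math
import random

def produce_child_path(parent, goal, step=1.0, depth=500):
    child = tuple(c + random.uniform(-step, step) for c in parent)
    if math.dist(child, goal) < math.dist(parent, goal):
        parent = child
    if math.dist(parent, goal) < 0.1 or depth == 0:
        # Base case: return the list that every caller will add to.
        return [parent]
    # Recurse first; only after that call returns do we add this
    # step's parent to the list -- just like the add to list node.
    return produce_child_path(parent, goal, step, depth - 1) + [parent]
```

Because the adds happen as the calls unwind, the best point ends up at the front of the list; where the child was worse, the same parent appears twice in a row.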


So when the recursion ends, we pass a list out as the first returned value (we can’t return anything earlier because another produce child node keeps getting called deeper and deeper before reaching the output node); then for each recursive call there is a call to add to list that adds the XYZ parent at each step to that list!

When the node returns in the main graph we’ll get a list of each step of the recursion.  This will show repeated entries where the child solution was worse.