Rapid Recursive Method


Breakthrough Methods for Solving Decision Problems.

About Recursive Methods.

Recursive methods represent a true breakthrough in the way people analyze complex decisions. These methods natively capture more information about options, market conditions, and risks than traditional methods such as DCF models. Because they capture more information and use it more effectively, recursive models offer a demonstrably superior alternative to the computer-assisted methods available today, most of which are designed around ease of computation.

Our patent-pending Rapid Recursive® methodology mimics the way humans think through complex decisions, known formally as sequential decision problems. Rather than trying to solve one massive problem all at once, our method solves manageable subproblems, which can then be aggregated into a solution to the larger problem at hand.

Within this page you will find a more technical definition and explanation of how the Rapid Recursive® method works and why it is important in the realm of decision making. We understand that much of this information is more technical than our typical user needs to know, but we share it to make the validity and credibility of our breakthrough method clear. We encourage you to ask questions!

A Simple Example.

The essence of a recursive model is to separate a group of repeated decisions into a series of two-period decisions. This allows thousands of possible combinations of situations and actions to be considered in a more compact manner than other methods allow.

Recall the last time you had to drive somewhere: you probably did not get in your car and simply drive due east to get from A to B, driving through your neighbor’s garage in the process. More likely, you got in the car, drove down the street to an intersection, made a decision about how to proceed from that intersection to the next, and repeated this process until you arrived at your destination. This is a recursive approach to getting from A to B.

Theory.

The theory behind this approach originated with Richard Bellman’s 1957 book Dynamic Programming. Bellman argued that the solution to a multi-period decision problem should maximize, over all available actions, the sum of the following:

  1. The current period earnings or other rewards; and
  2. The discounted value expected in the next period, given the decision made in the current period.

Bellman’s solution not only describes the value of being in a particular situation (defined as the sum of 1 and 2), but also prescribes the best action to take in each situation in the model (the one that maximizes the value). This stems from Bellman’s “principle of optimality,” which can be rephrased as “choose the best course going forward, regardless of what decisions you made in the past, in a manner that maximizes the value to you.”
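As a purely hypothetical numeric illustration (the figures below are invented for clarity and do not come from any model): suppose a particular action yields current period earnings of 100, the value expected in the next period given that action is 500, and the discount factor is 0.9. The value of being in the current situation under that action is then

\begin{align*}
100 + 0.9 \times 500 = 550
\end{align*}

and the prescribed action in that situation is whichever available action makes this sum largest.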

You can read more about the intellectual history of recursive methods, which includes contributions from numerous award-winning scholars, on our resources page.

Why “Recursive”?

Note that the discounted value expected in the next period is a direct input to the calculation of value in this model. This is an example of what mathematicians refer to as a “recursive definition,” which involves defining an object in terms of itself. Other common recursive definitions include the Fibonacci sequence ( F(n)=F(n-1)+F(n-2) ) and the factorial operator ( n!=n*(n-1)! ).
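As a simple illustration (not part of the Rapid Recursive® Toolbox itself), both recursive definitions translate directly into short Python functions that call themselves:

def fibonacci(n):
    """F(n) = F(n-1) + F(n-2), with base cases F(0) = 0 and F(1) = 1."""
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

def factorial(n):
    """n! = n * (n-1)!, with base case 0! = 1."""
    if n == 0:
        return 1
    return n * factorial(n - 1)

# Example: fibonacci(10) returns 55, and factorial(5) returns 120.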

Due to its recursive definition of value, and the fact that the solution techniques for these models involve recursive calculations, the method described here is often called a “recursive method.” These methods are also sometimes referred to as the “value functional method,” “dynamic programming,” or “stochastic control.”

Key Inputs for Recursive Methods.

Using recursive methods to solve a complex decision problem requires the identification of the following core elements, which will be specific to each problem:

  • States, which are the different situations in which a decision must be made. For example, for many operating firms, the revenue of the company, economic conditions in the industry and the existence of stable and experienced management are natural states. For a driver, an appropriate state would be their current location along a route.
  • Actions are the variables under the control of the decision maker, who makes a decision or takes an action based on the conditions of the current state. For example, for many operating firms, choosing different levels of investment is a natural action; for drivers, choosing if and where to turn are natural actions.
  • Transition probabilities are the probabilities of moving to a future state, given the current state and the action taken. For example, investing more in the current state would likely increase the chances of a business moving to a state with higher revenue in the next period, while turning left would lead you down a different road than driving straight.
  • Rewards are the immediate earnings or benefits received in the current state by the decision maker upon taking an action. For an operating business, net income is a natural reward. For drivers, transit time or fuel costs make for natural rewards.

Using these building blocks, the methods recognize your ability to change course as conditions change, natively handle asymmetric risk, consider hundreds of possible scenarios, and identify value maximizing solutions to your decision problems. This is accomplished through the intuitive interaction of the four elements listed above: choosing an action in a state generates a reward and influences the state at the next decision period through a transition probability.
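To make these building blocks concrete, here is a minimal sketch in Python of a hypothetical two-state, two-action investment problem. The state names, action names, and all numbers below are invented purely for illustration; they are not drawn from the Rapid Recursive® Toolbox or from any real business.

# Hypothetical two-state, two-action decision problem (all numbers are illustrative).
states = ["low_revenue", "high_revenue"]
actions = ["invest", "hold"]

# rewards[state][action]: immediate earnings received this period.
rewards = {
    "low_revenue":  {"invest": -10.0, "hold": 5.0},
    "high_revenue": {"invest": 40.0,  "hold": 60.0},
}

# transitions[state][action][next_state]: probability of moving to next_state.
transitions = {
    "low_revenue": {
        "invest": {"low_revenue": 0.4, "high_revenue": 0.6},
        "hold":   {"low_revenue": 0.8, "high_revenue": 0.2},
    },
    "high_revenue": {
        "invest": {"low_revenue": 0.2, "high_revenue": 0.8},
        "hold":   {"low_revenue": 0.5, "high_revenue": 0.5},
    },
}

beta = 0.9  # discount factor applied to next-period value

The later sketches for the Bellman equation and the two solution algorithms reuse these same names.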

The Value Functional.

Combining the elements described above gives rise to a value functional equation, in which the value of the current state of affairs is the maximum, over available actions, of the sum described earlier. This equation is, in mathematical terms, a functional rather than a simple function. A functional is a “function of functions,” described by Dreyfus (1963) as a function that takes curves as its arguments.

The value functional equation is sometimes called a Bellman equation. In various forms, such methods may also be called dynamic programming, stochastic control, impulse control, the value functional method, or a Markov Decision Process.

The concepts and variables in a value functional equation, and then the value functional (“Bellman”) equation itself, are shown below.

Functional Equation Variables

\begin{align*}
V(s,t) &= \mbox{value at time } t \mbox{ given state } s \\
f(s,x) &= \mbox{reward function given state and action} \\
g(s,x) &= \mbox{transition function} \\
t &= 0,\ldots,T \\
s_t &= s_0,\ldots,s_T \\
x_t &= x_0,\ldots,x_T \\
\beta &= \mbox{discount factor} \\
\epsilon &= \mbox{random error term}
\end{align*}

Value Functional (Bellman) Equation

\begin{align*}
V(s_t) &= \max_{x \in \Gamma} \left\{f(s,x)+\beta E[V(s_{t+1})]\right\} \\
\mbox{where:} \\
s &= \mbox{state} \\
x &= \mbox{action} \\
\Gamma &= \mbox{feasible set of actions}
\end{align*}
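The right-hand side of the Bellman equation can also be written as a short Python function. This is a minimal sketch that assumes the hypothetical states, actions, rewards, transitions, and beta defined above; the function name bellman_value is ours, not the Toolbox’s.

def bellman_value(s, V):
    """Right-hand side of the Bellman equation for state s, given an
    estimate V of next-period values (a dict mapping state -> value)."""
    return max(
        rewards[s][x] + beta * sum(p * V[s2] for s2, p in transitions[s][x].items())
        for x in actions
    )

# Example: with all next-period values assumed to be zero,
# bellman_value("low_revenue", {s: 0.0 for s in states}) returns 5.0,
# the best immediate reward available in that state.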

Solution Algorithms.

There are two main solution algorithms for the value functional equation: value function iteration and policy iteration. We describe each conceptually below.

Value Function Iteration

Value function iteration begins with an estimate of the value functional, which is used in place of \beta E[V(s_{t+1})] and combined with f(s,x) to update the estimate of the value functional. This new estimate is then used in the same manner to obtain the next iteration of the value functional. The process continues until the distance between successive iterations of the value functional falls below a predetermined tolerance.
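Here is a minimal sketch of value function iteration in Python, again assuming the hypothetical problem defined earlier (states, actions, rewards, transitions, beta); the tolerance and function name are our own choices for illustration.

def value_iteration(tol=1e-6):
    """Repeatedly apply the Bellman equation until successive value estimates converge."""
    V = {s: 0.0 for s in states}  # initial estimate of the value functional
    while True:
        V_new = {
            s: max(
                rewards[s][x] + beta * sum(p * V[s2] for s2, p in transitions[s][x].items())
                for x in actions
            )
            for s in states
        }
        # Stop when the largest change between successive iterations is below the tolerance.
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

# Example: V = value_iteration() returns the converged value for each state.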

Policy Iteration

In a similar manner, policy iteration begins with an assumed action for every state. Using this assumed policy, the algorithm calculates the corresponding value functional. That value functional is then used in place of \beta E[V(s_{t+1})] to find an improved action for each state. The algorithm alternates these two steps until successive iterations return the same policy recommendation for every state.
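And here is a minimal sketch of policy iteration, under the same assumptions as the value iteration example above (the hypothetical problem data and our own function names):

def policy_iteration(tol=1e-8):
    """Alternate policy evaluation and policy improvement until the policy stops changing."""
    policy = {s: actions[0] for s in states}  # arbitrary starting policy
    while True:
        # Policy evaluation: compute the value of following the current policy indefinitely.
        V = {s: 0.0 for s in states}
        while True:
            V_new = {
                s: rewards[s][policy[s]]
                + beta * sum(p * V[s2] for s2, p in transitions[s][policy[s]].items())
                for s in states
            }
            if max(abs(V_new[s] - V[s]) for s in states) < tol:
                V = V_new
                break
            V = V_new
        # Policy improvement: in each state, pick the action that maximizes
        # current reward plus discounted expected next-period value.
        improved = {
            s: max(
                actions,
                key=lambda x: rewards[s][x]
                + beta * sum(p * V[s2] for s2, p in transitions[s][x].items()),
            )
            for s in states
        }
        if improved == policy:  # same recommendation in every state: done
            return policy, V
        policy = improved

# Example: policy, V = policy_iteration() returns the recommended action and value per state.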