## Recursive algorithms

### September 14, 2013

A recursive algorithm calls itself on a smaller portion of the input until a base case is reached (e.g. the data cannot be split any further).

The canonical example is the factorial operation: $x! = x (x-1) (x-2) \cdots 2$.
A straightforward Python function that implements this idea follows:

```python
def fact(x):
    if x <= 1:  # base case: 0! = 1! = 1
        return 1
    else:
        return x * fact(x - 1)  # recursive call on a smaller input
```
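As a quick sanity check (the calls below are illustrative, not part of the original post), the function reproduces the expected values for small inputs:

```python
def fact(x):
    if x <= 1:  # base case: 0! = 1! = 1
        return 1
    else:
        return x * fact(x - 1)  # recursive call on a smaller input

# A few small inputs, checked by hand
print(fact(1))  # 1
print(fact(5))  # 5*4*3*2*1 = 120
```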


In practice, recursion can speed up the execution of divide-and-conquer (D&C) algorithms, i.e. algorithms in which the initial problem is repeatedly split into more manageable sub-problems, as discussed in the next section.
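As a concrete illustration (a sketch added here, not taken from the original text), the textbook divide-and-conquer algorithm is merge sort: split the list in half, sort each half recursively, then merge the two sorted halves in linear time:

```python
def merge_sort(data):
    """Sort a list by recursively splitting it and merging the sorted halves."""
    if len(data) <= 1:  # base case: a list of 0 or 1 elements is already sorted
        return data
    mid = len(data) // 2
    left = merge_sort(data[:mid])   # recurse on each half
    right = merge_sort(data[mid:])
    # merge step: repeatedly take the smaller front element of the two halves
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])  # at most one of these two is non-empty
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```

Each level of the recursion does linear work in the merge step, which is exactly the situation the Master Theorem below analyzes.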

### Analysis: the Master Theorem
An asymptotic estimate of the running time $T$ of a recursive D&C algorithm whose subproblems have deterministically equal size is given by the recurrence $\displaystyle T(n) \leq a T\!\left(\frac{n}{b}\right) + O(n^d)$, where $n$ is the input size, $a \geq 1$ is the number of recursive calls per level (i.e. the branching factor of the recursion tree), $b > 1$ is the factor by which the input size shrinks at each level, and $O(n^d)$ bounds the work done outside the recursive calls (i.e. splitting the input and combining the results).
One can then identify three cases (for simplicity, $n$ is assumed to be a power of $b$):

• If $a = b^d$ then $T(n) = O(n^d \log n)$
• If $a < b^d$ then $T(n) = O(n^d)$
• If $a > b^d$ then $T(n) = O(n^{\log_b a})$
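The case analysis can be packaged as a small helper; the function below is a sketch (its name and string output are my own, not from the original text). For merge sort, $a = 2$, $b = 2$, $d = 1$, so $a = b^d$ and the first case gives $T(n) = O(n \log n)$:

```python
import math

def master_bound(a, b, d):
    """Return the Master Theorem bound for T(n) <= a*T(n/b) + O(n^d)."""
    if a == b ** d:            # work is balanced across levels
        return f"O(n^{d} log n)"
    elif a < b ** d:           # root level dominates
        return f"O(n^{d})"
    else:                      # leaves dominate
        return f"O(n^{math.log(a, b):.2f})"

# Merge sort: 2 recursive calls, input halved, linear merge
print(master_bound(2, 2, 1))  # O(n^1 log n), i.e. O(n log n)
# Binary search: 1 recursive call, input halved, constant work
print(master_bound(1, 2, 0))  # O(n^0 log n), i.e. O(log n)
```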