## Coffee, graphs and musings on scientific education

### October 3, 2013

*Good morning!*

Today I kick off with some more basic algorithms theory. Tim Roughgarden, the instructor, has a precise yet captivating delivery style (not too informal, not too nerdy), which makes this worthwhile.

In general, I have noticed the same pattern throughout my studies: the instructor bears a large responsibility for keeping the students’ heads up, in both senses.

The presentation of even the most formal of subjects has to be enriched by “refreshing” elements. For example, my Calculus 1, 2, 3 AND 4 teacher, the never-enough-celebrated G. C. Barozzi, had a way of embedding anecdotes and biographical facts about mathematicians into his lectures. This both put the subject in its historical context, making it feel less abstract, and gave the audience a break from sustained focus. Seems too nerdy still? You’ve found the wrong blog then! Ha.

I can agree that some subjects might feel a bit “dogmatic”, as formal proof of all their aspects would require a disproportionate effort from a large part of the audience. In this case, it is up to the lecturer to select the more illuminating material beforehand: *Longum iter est per praecepta, breve et efficax per exempla* (“The road through precepts is long; through examples it is short and effective”, Seneca).

So, contraction algorithms for the MINCUT problem for graphs: given a graph G = (V, E), with nodes V and edges E, which is the smallest set of edges that divides G into two otherwise disjoint subsets of nodes A and B (i.e. the smallest *cutset*)?

**Random contraction (Karger’s algorithm)**

Sequentially remove edges uniformly at random, merging the two endpoint nodes u and v (“edge contraction”) and rerouting the edges initially connected to u and to v.

The cutset that remains when there are only two nodes left is a local optimum of MINCUT.

*A single run of Karger’s algorithm (adapted from Wikipedia)*

If m is the size of the edge set, there are up to m! possible contraction sequences. Only a subset of these yield the true MINCUT (i.e. the problem is path-dependent).

## Recursive algorithms

### September 14, 2013

A *recursive* algorithm calls itself with a subset of the initial data until its base case is reached (e.g. data cannot be split further).

The canonical example is the factorial operation: n! = n · (n − 1)!, with 0! = 1.

A straightforward Python function that implements this idea follows:

```python
def fact(x):
    if x <= 1:
        return 1
    else:
        return x * fact(x - 1)
```

In practice, recursion can actually speed up the execution of divide-and-conquer (DNQ) algorithms (i.e. those in which the initial problem is sequentially split into more manageable sub-problems), as discussed in the next section.
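To make the divide-and-conquer idea concrete, here is a sketch of merge sort (my own illustration, not from the course): split the input in half, sort each half recursively, then merge the two sorted halves in linear time.

```python
def merge_sort(data):
    """Divide-and-conquer sort: split, recurse, merge."""
    if len(data) <= 1:                  # base case: nothing left to split
        return list(data)
    mid = len(data) // 2
    left = merge_sort(data[:mid])       # two recursive calls,
    right = merge_sort(data[mid:])      # each on half the input
    # Merge step: linear work done outside the recursion.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

The split/recurse/merge structure is exactly what the Master Theorem below analyzes.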

**Analysis: the Master Theorem**

An asymptotic estimate of the running time of a recursive DNQ algorithm with *deterministically equal size subproblems* is given by the following recurrence: T(n) = a · T(n/b) + O(n^d), where n is the data size, a and b are the number of recursive calls per level (i.e. the branching factor of the recursion tree) and the associated reduction factor in input size, and O(n^d) describes the computational effort associated with preprocessing (i.e. all that is done before the recursions begin).

One can then identify three cases (for simplicity, n is assumed to be a power of b):

- If a = b^d, then T(n) = O(n^d · log n)
- If a < b^d, then T(n) = O(n^d)
- If a > b^d, then T(n) = O(n^(log_b a))
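The three cases can be read off mechanically once a, b and d are known; the small helper below is my own illustration. For instance, merge sort (a = 2, b = 2, d = 1) falls in the first case, giving the familiar O(n log n).

```python
import math

def master_bound(a, b, d):
    """Return the bound predicted by the Master Theorem
    for the recurrence T(n) = a*T(n/b) + O(n^d)."""
    if math.isclose(a, b ** d):
        return f"O(n^{d} log n)"
    if a < b ** d:
        return f"O(n^{d})"
    return f"O(n^{math.log(a, b):g})"

# merge sort:    a=2, b=2, d=1 -> O(n^1 log n), i.e. O(n log n)
# binary search: a=1, b=2, d=0 -> O(n^0 log n), i.e. O(log n)
```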