Dear blog, I’ve got the hots for Haskell.
Its terse syntax is intimidating at first, but as you keep exploring, you start seeing its beauty and even possible ways to write a Hello World program.
But let’s start from the easy bits: this language makes it trivial to compose functions and, in a sense, to specialize the bare language so that it represents more closely the semantics of the computational problem at hand.

One of the most basic higher-order functions is map, which takes a unary function f and a list l and returns a new list, obtained by applying f to every element of l:

map         :: (a -> b) -> [a] -> [b]
map f []     = []
map f (x:xs) = f x : map f xs

Silly examples!

> map (+1) [1,2,3]
> map (\c -> if c=='o' then 'i' else c ) "potatoes"
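For the record, here is what these give back (the names bumped and swapped are mine, just so the snippet loads standalone):

```haskell
-- map (+1) increments every element of the list
bumped :: [Int]
bumped = map (+1) [1, 2, 3]        -- [2,3,4]

-- every 'o' in "potatoes" becomes an 'i'
swapped :: String
swapped = map (\c -> if c == 'o' then 'i' else c) "potatoes"   -- "pitaties"
```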

Similarly, the right ‘fold’ operation foldr (called ‘reduce’ in some other languages) takes a binary function f, a starting value v and a list (x:xs), and combines the elements of the list with f, recursively from the right:

foldr :: (a -> b -> b) -> b -> [a] -> b
foldr f v [] = v
foldr f v (x:xs) = f x (foldr f v xs)
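Two classic sanity checks, using the Prelude’s foldr (which has exactly this definition); the names total and rebuilt are mine:

```haskell
-- foldr (+) 0 [1,2,3] expands to 1 + (2 + (3 + 0))
total :: Int
total = foldr (+) 0 [1, 2, 3]      -- 6

-- folding with the list constructor (:) and the empty list rebuilds the list
rebuilt :: [Int]
rebuilt = foldr (:) [] [1, 2, 3]   -- [1,2,3]
```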

Binary functions? We’re not restricted to arithmetic operations! If we take f in the example above to be the function composition operator (.), and the starting element v to be the identity function id, the Haskell typechecker will infer that foldr (.) id just requires a list of a -> a functions (i.e. functions that share the same type signature as id, in this case) and an element to start with!
So here’s compose:

compose :: [a -> a] -> a -> a
compose = foldr (.) id
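A quick usage sketch, restating compose so the snippet runs standalone (the helper name doublePlusOne is mine). Since (.) chains right to left, the last function in the list is applied first:

```haskell
compose :: [a -> a] -> a -> a
compose = foldr (.) id

-- compose [(+1), (*2)] == (+1) . (*2) . id, i.e. \x -> (x * 2) + 1
doublePlusOne :: Int -> Int
doublePlusOne = compose [(+1), (*2)]
```

Note the edge case: compose [] is just id, so the empty list of functions leaves its argument untouched.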

This idea of representing n-ary functions as daisy-chains of unary functions is called “currying”, and it is crucial for reasoning about programs: it is what makes partial applications such as foldr (.) id possible, since a multi-argument function can be applied to its arguments one at a time.
Functional composition is implemented as follows, where “\u -> .. body .. ” is the idiom for anonymous (lambda) functions of a variable u:

(.) :: (b -> c) -> (a -> b) -> a -> c
(.) f g = \x -> f (g x)

Another thing of beauty is unfold, which builds a data structure up from a seed element.
The version below is specialized to lists: the base case is the empty list [] and the structure is built with the list constructor (:).

unfold :: (a -> Bool) -> (a -> b) -> (a -> a) -> a -> [b]
unfold p h t x
  | p x       = []
  | otherwise = h x : unfold p h t (t x)

See the type signature? We need a predicate p (the “stop” condition), a function h that produces each element of the result from the current seed, and a function t that produces the next seed for the recursion.
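As a first sanity check (countdown is my name for it), here is a countdown list built with unfold, restated so the snippet runs standalone: stop at zero, emit the seed itself, and decrement for the next step.

```haskell
unfold :: (a -> Bool) -> (a -> b) -> (a -> a) -> a -> [b]
unfold p h t x
  | p x       = []
  | otherwise = h x : unfold p h t (t x)

-- countdown 5 == [5,4,3,2,1]: p stops at 0, h = id emits the seed, t decrements
countdown :: Int -> [Int]
countdown = unfold (== 0) id (subtract 1)
```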

Unfold can be generalized to arbitrary recursive structures, such as trees, heaps etc.

As a quick example, a function that takes a positive integer and returns its digits as a list can be formulated very compactly in terms of unfold (note the edge case: 0 maps to the empty list):

toDigits :: Integer -> [Integer]
toDigits = reverse . unfold (==0) (`mod` 10) (`div` 10)
> toDigits 29387456
[2,9,3,8,7,4,5,6]

Next up: Monoids! Functors! Monads! “Hello World” ! and other abstract nonsense.

  1. Computer program power consumption
    A programming language that “minimizes” power consumption through minimal interconnect usage (e.g. memory calls).
  2. Food sourcing power consumption
Farmland supply to cities: how to optimize land usage? What part of the produce can be sourced locally, e.g. grown at the consumer’s end, or switched to hydroponics and its cultivation brought within the city itself?

Both these problems require a grammar of solutions, rather than a single instance, due to the diversity of the operating/boundary conditions that are encountered.
As such, I don’t think that a “proof of correctness” for either can be hoped for, but perhaps a number of heuristic checks might prove the point.
The former is addressed by a single technology, whereas the second requires a diverse array of strategies.

General considerations

  • Area and land usage
    Arbitrary rearrangement of the resources is not trivial: CPUs are designed with CAD tools that favor periodicity and reuse, and farmland restricts supply due to physiological productivity/rest cycles.
  • Time and flow
    Time plays a part as well: the edges in these supply nets do not handle a constant flow. In the first case, storage is regulated by registers, queues and stacks, whereas in the second, the flowing entities are subject to seasonal variation, degrade with time etc.

This framework is intentionally generic in order to highlight similarities, and it is of course a work in progress.
Both these problems in fact have broad political implications, which leaves plenty of space for many juicy discussions. Looking forward.


  1. An article from the NYT: A Balance Between the Factory and the Local Farm (Feb. 2010) highlights both the high costs of local (i.e. small-scale) green production, citing The $64 Tomato, and the related climatic issues (e.g. cultivation on terrain located in the snow belt).
    The article closes with “Localism is difficult to scale up enough to feed a whole country in any season. But on the other extreme are the mammoth food factories in the United States. Here, frequent E. coli and salmonella bacteria outbreaks […] may be a case of a manufacturing system that has grown too fast or too large to be managed well.
    Somewhere, there is a happy medium.” — an optimum, if you will.

Side questions

  • Why do large-scale economies “work better”, i.e. have a higher monetary efficiency that drives down prices for the end user? More effective supply chains, waste minimization, minimization of downtime …

Extensions and interfaces

October 16, 2013

I would like to gather here data and interpretations regarding artificial extensions to human capability (the broadest definition of “technology”): are we witnessing a transition from “technology-as-screwdriver” to “technology-as-cognition-extension”? More precisely, exactly how advanced must a technology be before one no longer realizes one is using it?
This abstracts one step beyond A.C.Clarke’s “Third Law”: technology and magic will, at that point, be reduced to commonplace human experience, and therefore become indistinguishable from it.
It’s a rather bold statement, and I’m no starry-eyed singularitarian. Let’s start with a simple analysis by restricting to present-day tangible R&D results, and leave end-of-history predictions to fortune tellers.

Large scale: Behavioral trait clustering
October 29, 2012 : “We show that easily accessible digital records of behavior, Facebook Likes, can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender. The analysis presented is based on a dataset of over 58,000 volunteers who provided their Facebook Likes, detailed demographic profiles, and the results of several psychometric tests. […]”

Personal scale: Distributed training for machine learning
October 11, 2013 : “Qualcomm also envisions alternatives to app stores, which he called “experience stores,” allowing users to download expertise into their consumer products”. Have a look at the original EETimes article.
While neural networks aren’t exactly news, the idea of “sharing” training across devices seems intriguing. I wonder whether this concept is of broader applicability.

Personal scale: Human-computer interfaces
This is where human-machine interaction on a personal (affordable, household) scale started: the computer mouse and hypertext, introduced in 1968 during Douglas Engelbart’s “mother of all demos” (official title: “A Research Center for Augmenting Human Intellect”):

… and this is (a small selection of) where we are now:

Technical Illusions CastAR, an Augmented Reality “platform” composed of glasses with integrated projector, wand/joystick, backreflective mat (AR) or VR glass add-ons.
Video here
Still raking in funds from the Kickstarter community, but apparently it’s going well. I’m a bit concerned about all that hardware one has to deploy, especially the “mat”. Apart from showers, only one application I can think of benefits from mats.

Thalmic Myo.
Video here
This one is an interesting concept: it’s an armband device that integrates accelerometer and bearing sensors with muscle-activity (EMG) readout, so muscular twitches such as finger contractions can be correlated with movement of the limb as a whole, allowing for very expressive interaction. It has been available for pre-order for a few months now, will sell for 149 USD from the beginning of 2014, and I’m seriously considering getting one.

Leap Motion, and extensions thereof, e.g. the DexType “keyboard” software, see below.
Video here
The Leap Motion simply processes optical range information (possibly using “structured light” like the Microsoft Kinect), so a number of artifacts in the gesture recognition have to be “engineered against”. However, offering an open SDK was a winning move: there are dozens of applications and games in various stages of development on offer in the Leap store.

Possible implications
Adaptive communication: terminals that are aware of user patterns and sync accordingly, where “sync” means displaying information based on remote context (e.g. the remote user being busy or focused on something else). Attention-economics brokerage.
Are we heading towards higher-order communication, i.e. one in which we won’t communicate with a machine one character at a time but through symbols, sign language, ideograms?
Next level: J. Lanier’s “postsymbolic” communication, as in cuttlefish: the “body” of a user (intended in an extended sense, i.e. with hardware enhancements) becomes a signaling device in its own right (e.g. flashing, changing shape/“state”, radiating information, etc.)

In fact, I think it’s only natural that machine interfaces will evolve so as to effectively disappear; the only question is when this transition will occur.

On plant growth and form

September 28, 2013

How quickly does the number of leaves on this plant increase, and why?

The photo above is of a Schlumbergera truncata (or “Christmas cactus”), which my mother (from whom I got the parent plant) calls “Woman’s tongue”. The latter might be seen as a rather sexist definition (pointy and forked..) but I guess it’s an old heritage, from when the world was younger..

Anyway, how fast do the leaves increase in number, and what drives the general form of the plant?
Say we have initially N branches, each of which can generate one to three leaves per generation (branching factor p), more or less at random.
If the branching factor were constant and identical for all branches, after one generation we would see 2N to 4N leaves; after two, 3N to 13N, or, for the j-th branch: L_j = \sum\limits_i{l_{i\,j}}.
The number of leaves l_i at each level i is seen to be \Theta(l_{i-1}) where the bound constants are \min(p) and \max(p).
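These bounds are easy to check numerically. Here is a minimal sketch (function names are mine) that computes the cumulative leaf count for a constant branching factor p:

```haskell
-- l_0 = N, l_i = p * l_{i-1}: leaf counts per level for a constant factor p
levelCounts :: Integer -> Integer -> [Integer]
levelCounts n p = iterate (* p) n

-- cumulative count after g generations: L = l_0 + l_1 + ... + l_g
totalAfter :: Integer -> Integer -> Integer -> Integer
totalAfter n p g = sum (take (fromIntegral g + 1) (levelCounts n p))
```

For instance, with N = 1 the extremes p = 1 and p = 3 give the lower and upper bounds on the total after each generation.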

In the real world, a branch has spatial extent, with each leaf occupying a volume, with no intersections or overlaps in \mathbb{R}^3. Moreover, the leaves seek and thrive in sunlight (phototropism), which conditions their orientation and size.
Therefore we could represent a plant as a vector field, with “streamlines” (nutrient direction) and “polarization” (orientation to sunlight).
Gravity, through the self-weight loading of the branches and capillary diffusion of nutrients, obviously plays a part.
We can sketch the processes at play in the following, informal manner. First, some notation for leaf k of level i: leaf size s_{i\,k}; orientation \alpha_{i\,k}; sunlight exposure (intensity flux) \phi_{i\,k}; viability v_{i\,k} (a function of exposure and distance from the root); repulsion potential \rho_{i\,k} (a function of size and distance to the nearest leaf neighbors); branch node stiffness \sigma_{i\,k}, which increases as the leaf matures.
Moreover we can define the branch vitality v_b as the sum over all the parent leaves (viability upstream).
  • s \rightarrow \phi, \rho, \sigma (leaf size influences exposure, repulsion and stiffness)
  • \phi, v_b, \sigma \rightarrow s, v, \alpha (exposure, branch vitality and stiffness influence leaf size, viability and leaf angle)
  • \rho \rightarrow \phi (repulsion influences exposure through position: modeling this requires collision detection and ray tracing, which demand considerable computational effort)
  • \alpha_{i-1\,k} \rightarrow \alpha_{i\,k} (the orientation of a parent leaf constrains that of its children)
  • \alpha \rightarrow \phi, but also \phi \rightarrow \alpha (orientation determines exposure, and exposure in turn re-orients the leaf)
As each leaf matures, it becomes larger in size, stiffer and possibly its exposure efficiency reaches a plateau.

  • Open source lab tools

    Scientific instrumentation tends to be expensive: the long R&D and calibration cycles necessary to produce a precise and reliable tool have a direct impact on prices. However, there is a growing number of initiatives that aim to distribute or sell low-cost, “open” tools/techniques, e.g.,

    Will openness foster science education and proliferation, specifically in places that cannot afford branded machinery? Is low-cost an observer-independent synonym for low-quality? (Where quality might mean any combination of reproducibility, availability of debugging/maintenance support, etc.)

  • The Order of Things: What College Rankings Really Tell Us – M. Gladwell

    Not exactly news, but this piece from the New Yorker explores how various – arbitrary – choices of metrics used to rank higher education institutions (e.g. graduation rate, tuition etc.) lead to very different results.

    This would not be of much consequence, if these charts were not routinely touted as authoritative by the school deans themselves, and used in all media to bias the perception of “excellence”.

  • NSA: Possibly breaking US laws, but still bound by laws of computational complexity – S. Aaronson

    Scott Aaronson argues that the most widely used “attacks” on citizens’ privacy are the most straightforward (“side-channel”, e.g. lobbying either the standardization committees to put forward weak encryption protocols or commercial software vendors to plant backdoors in their network services) and do not involve disproving mathematically-proven techniques. Social engineering, old as security itself.

    Bruce Schneier, a prominent security expert, gives a few practical tips to have a better illusion of online privacy.

Last but not least, XKCD’s analysis is ever so accurate:


Instead of recalling why this seemingly innocuous chubby Rastafarian is, and has been for three decades now, one of the spearheads of the contemporary debate about the future, let’s focus on this monologue of his recently recorded by EDGE.

Lanier argues that the widespread usage of networking has reduced the individual to a trade object of corporations and markets, rather than empowering a new middle class that “creates value out of their minds and hearts”.

The promise of the early Internet, to horizontally connect individuals in a heterogeneous, global but ultimately personalized trading ground, has transformed into a brutal ecosystem where there is no strategy, no planning, only instantaneous profit-oriented logic.
In this heavily peaked pyramid, players that are unable to perform real-time, ubiquitous information gathering and decision making are simply left out, as a necessary byproduct of an “early adopter” effect: the first takes all.
(Can this be considered a consequence of finite resources/finite “context size” and/or of small-world network characteristics? Just my personal note, will think about this)

He also points at possible interpretations of up-and-coming technological developments and their use in counteracting this “trend” to restore the production of value to the individual user, which is, he argues, the only way for civilization not to end as either “Matrix or Marx” (avoidable catchy quote).

Enjoy this hour-long thought-provoking rollercoaster blah blah, it will crack your mind open like a veritable crowbar!



Open Source Ecology

August 23, 2011

To recognize that human civilization needs to re-learn to live IN the ecosystem, and not OFF it.

This is the philosophical manifesto and the life project of Marcin Jakubowski and a growing crew of concerned inventors, builders, makers; to exploit centuries of technological advances and package them as open-source hardware tools for community building, with locality and sustainability as core values.

More concretely, quoting from the Working Assumptions of the Open Source Ecology wiki,

Civilizations are shaped by their resource base. The resource base is what gives people power. By controlling others through an economic or social hierarchy, we can control resources, and thus gain power. Resource conflicts occur because people have not yet learned to manage resources without stealing. Society has not transcended the brute struggle for survival. We remain on the bottom steps of Maslow’s pyramid. Transcending resource conflicts by creating abundance, first for hundreds, then for thousands of people, is now possible if knowledge flows openly and advanced technology is applied to produce goods.


Education, media, and social engineering programs have subjugated human integrity to passive consumerism, with its related problems (resource conflicts, loss of freedom such as wage slavery). The only way out of this is creating a framework within which humans can prosper: provision of true education, learning of practical skills, stewardship of land, advanced technology for the people, and open access to economically significant know-how.

See Marcin make his point about transcending artificial scarcity: