Extensions and interfaces

October 16, 2013

I would like to gather here some data and interpretation regarding artificial extensions to human capability (the broadest definition of "technology"). Are we witnessing a transition from "technology-as-screwdriver" to "technology-as-cognition-extension"? More precisely, exactly how advanced must a technology become before one no longer realizes one is using it?
This abstracts one step beyond A. C. Clarke's "Third Law": at that point, technology and magic will both be reduced to commonplace human experience, and therefore become indistinguishable from it.
It's a rather bold statement, and I'm no starry-eyed singularitarian. Let's start with a simple analysis, restricting ourselves to present-day, tangible R&D results, and leave the end-of-history predictions to fortune tellers.

Large scale: Behavioral trait clustering
October 29, 2012 : “We show that easily accessible digital records of behavior, Facebook Likes, can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender. The analysis presented is based on a dataset of over 58,000 volunteers who provided their Facebook Likes, detailed demographic profiles, and the results of several psychometric tests. […]”
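The pipeline behind that result is simple enough to sketch. Below is a minimal, self-contained illustration on synthetic data (my own toy stand-in, not the authors' dataset or code): reduce the binary user × Like matrix with SVD, then fit a linear model on the components. The paper used around 100 SVD components with logistic/linear regression; plain least squares keeps the sketch short.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's setup: a binary user x Like matrix.
n_users, n_likes = 200, 50
# Two hidden "trait" groups whose members tend to Like different pages.
trait = rng.integers(0, 2, n_users)                 # hidden binary attribute
base = np.where(trait[:, None] == 1, 0.30, 0.05)    # group-specific Like rates
base[:, n_likes // 2:] = 0.10                       # half the pages are neutral
likes = (rng.random((n_users, n_likes)) < base).astype(float)

# Step 1: reduce the centered Like matrix with SVD.
k = 5
U, s, Vt = np.linalg.svd(likes - likes.mean(axis=0), full_matrices=False)
components = U[:, :k] * s[:k]

# Step 2: fit a linear model on the components (least squares here,
# logistic/linear regression in the actual paper).
X = np.hstack([components, np.ones((n_users, 1))])
w, *_ = np.linalg.lstsq(X, trait, rcond=None)
pred = (X @ w > 0.5).astype(int)
accuracy = (pred == trait).mean()
print(f"training accuracy: {accuracy:.2f}")
```

With Like rates this well separated the toy model recovers the hidden trait almost perfectly; the unsettling part of the paper is that real Likes carry enough signal for the same trick to work on real attributes.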

Personal scale: Distributed training for machine learning
October 11, 2013: Qualcomm also envisions alternatives to app stores, "experience stores," which would allow users to download expertise into their consumer products. Have a look at the original EETimes article.
While neural networks aren't exactly news, the idea of "sharing" training across devices is intriguing. I wonder whether the concept has broader applicability.
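One way to read "sharing training across devices" (purely my speculation, not Qualcomm's design; every name below is hypothetical): each device refines a local copy of a model on its own private data, and the copies are periodically averaged back into a shared model. A minimal numpy sketch, with linear regression standing in for the model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sketch of "shared training": devices improve local copies of a
# model, then pool (average) the learned weights into a shared model.
true_w = np.array([2.0, -1.0])   # the relationship all devices observe

def local_update(w, n=50, steps=20, lr=0.1):
    """One device: a few gradient steps of linear regression on private data."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / n   # gradient of mean squared error
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(5):                # five rounds of sharing
    # Each of 10 devices starts from the shared weights, trains locally...
    local = [local_update(w_global.copy()) for _ in range(10)]
    # ...and the shared model becomes the average of the local models.
    w_global = np.mean(local, axis=0)

print("learned:", np.round(w_global, 2), "target:", true_w)
```

No device ever ships its raw data, only its learned weights, which is presumably the point of "downloading expertise" rather than downloading datasets.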

Personal scale: Human-computer interfaces
This is where human-machine interaction on a personal (affordable, household) scale started: the computer mouse and hypertext, introduced in 1968 during Douglas Engelbart's "mother of all demos" (official title: "A Research Center for Augmenting Human Intellect"): http://www.dougengelbart.org/firsts/dougs-1968-demo.html

… and this is (a small selection of) where we are now:

Technical Illusions CastAR, an Augmented Reality "platform" composed of glasses with an integrated projector, a wand/joystick, a retroreflective mat (for AR), or VR glass add-ons.
Video here
Still raising funds from the Kickstarter community, but apparently it's going well. I'm a bit concerned about all the hardware one has to deploy, especially the "mat". Apart from showers, I can think of only one application that benefits from mats.

Thalmic Myo.
Video here
This one is an interesting concept: an armband that combines accelerometer and orientation sensors with muscle-activity (EMG) readout, so that muscular events such as finger contractions can be correlated with movement of the limb as a whole, allowing for very expressive interaction. It has been available for pre-order for a few months now, will sell for 149 USD from the beginning of 2014, and I'm seriously considering getting one.

Leap Motion, and extensions thereof, e.g. the DexType "keyboard" software; see below.
Video here
The Leap Motion simply processes optical range information (possibly using "structured light," like the Microsoft Kinect), so a number of artifacts in the gesture recognition have to be "engineered against". However, offering an open SDK was a winning move: there are tens of applications and games, in various stages of development, on offer in the Leap store.
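Part of that "engineering against" is plain filtering: raw optical tracking positions jitter, so a first line of defense is a smoother that trades a little latency for stability. A minimal sketch (a generic exponential moving average of my own, not Leap's actual pipeline):

```python
def smooth(samples, alpha=0.3):
    """Exponentially weighted moving average over a stream of positions.

    Lower alpha = smoother but laggier; higher alpha = more responsive
    but jumpier. Real trackers tune this trade-off per gesture.
    """
    out, state = [], None
    for x in samples:
        # First sample initializes the filter; afterwards, blend new
        # measurement with the running estimate.
        state = x if state is None else alpha * x + (1 - alpha) * state
        out.append(state)
    return out

noisy = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]   # a jittery 1-D fingertip coordinate
print(smooth(noisy))
```

The smoothed trace settles into a narrow band instead of bouncing across the full jitter range, which is exactly what you want before feeding positions to a gesture classifier.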

Possible implications
Adaptive communication: terminals that are aware of user patterns and sync accordingly, where "sync" means displaying information based on remote context (e.g. the remote user being busy or focused on something else). Attention-economics brokerage.
Are we heading towards higher-order communication, in which one no longer communicates with a machine one character at a time, but through symbols, sign language, ideograms?
Next level: J. Lanier's "postsymbolic" communication, inspired by cuttlefish: the "body" of a user (intended in an extended sense, i.e. including hardware enhancements) becomes a signaling device in its own right (e.g. flashing, changing shape/"state", radiating information, etc.)

In fact, I think it's only natural for machine interfaces to evolve until they effectively disappear; the only question is when this transition will occur.

  • Open source lab tools

    Scientific instrumentation tends to be expensive: the long R&D and calibration cycles necessary to produce a precise and reliable tool have a direct impact on prices. However, a growing number of initiatives aim to distribute or sell low-cost, "open" tools and techniques, e.g. http://www.openbiotech.com, http://publiclab.org.

    Will openness foster science education and proliferation, specifically in places that cannot afford branded machinery? Is low-cost an observer-independent synonym for low-quality, where quality might mean any combination of reproducibility, availability of debug/maintenance support, and so on?

  • The Order of Things: what college rankings really tell us – M. Gladwell

    Not exactly news, but this piece from the New Yorker explores how various – arbitrary – choices of metrics used to rank higher education institutions (e.g. graduation rate, tuition etc.) lead to very different results.

    This would not be of much consequence if these charts were not routinely touted as authoritative by the school deans themselves, and used across the media to bias the perception of "excellence".

  • NSA: Possibly breaking US laws, but still bound by laws of computational complexity – S. Aaronson

    Scott Aaronson argues that the most widely used "attacks" on citizens' privacy are the most straightforward, "side-channel" ones (e.g. lobbying standardization committees to put forward weak encryption protocols, or pressuring commercial software vendors to plant backdoors in their network services), and do not involve breaking mathematically proven techniques. Social engineering, as old as security itself.

    Bruce Schneier, a prominent security expert, gives a few practical tips for maintaining a better illusion of online privacy.

Last but not least, XKCD’s analysis is ever so accurate:

(embedded from http://xkcd.com/1269/ )

Instead of recalling why this seemingly innocuous chubby Rastafarian has been one of the spearheads of the contemporary debate about the future for some three decades now, let's focus on this monologue of his, recently recorded by EDGE.

Lanier argues that the widespread usage of networking has reduced the individual to a trade object of corporations and markets, rather than empowering a new middle class that "creates value out of their minds and hearts".

The promise of the early Internet, to horizontally connect individuals in a heterogeneous, global, but ultimately personalized trading ground, has transformed into a brutal ecosystem in which there is no strategy and no planning, only instantaneous profit-oriented logic.
In this heavily peaked pyramid, players unable to perform real-time, ubiquitous information gathering and decision making are simply left out, as a necessary byproduct of an "early adopter" effect: the first taker takes all.
(Can this be considered a consequence of finite resources/finite "context size" and/or of small-world network characteristics? Just a personal note; I will think about this.)
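As a toy check of that note (my own illustration, not from Lanier's talk): preferential attachment, where new links favor already-popular nodes, is by itself enough to make early entrants dominate. A few lines of simulation:

```python
import random

random.seed(42)

# Preferential attachment: each newcomer links to an existing node chosen
# with probability proportional to its current degree ("rich get richer").
degrees = [1, 1]                  # start with two linked nodes
for _ in range(2000):
    target = random.choices(range(len(degrees)), weights=degrees)[0]
    degrees[target] += 1          # the popular node gets more popular
    degrees.append(1)             # the newcomer starts with one link

top10 = sorted(degrees, reverse=True)[:10]
share = sum(top10) / sum(degrees)
print(f"top 10 of {len(degrees)} nodes hold {share:.0%} of all links")
```

Ten nodes out of two thousand end up holding a wildly disproportionate share of the links, far above the uniform expectation of 0.5%, which is at least consistent with the "first takes all" reading.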

He also points to possible interpretations of up-and-coming technological developments, and to their use in counteracting this "trend" so as to restore the production of value to the individual user, which, he argues, is the only way for civilization not to end up as either "Matrix or Marx" (avoidable catchy quote).

Enjoy this hour-long, thought-provoking rollercoaster: it will crack your mind open like a veritable crowbar!

[blip.tv http://blip.tv/play/hLJxgs77WAA]


PULSAR (1990)

January 31, 2009

Abstract wiggliness from Japan.

Thanks to PinkTentacle.