Rhapsody On Metaphor & Intellectual Pleasure

Further, metaphors must not be far-fetched, but we must give names to things that have none by deriving the metaphor from what is akin and of the same kind, so that, as soon as it is uttered, it is clearly seen to be akin….

– Aristotle, Rhetoric 1405a

What are we doing when we aim for a semantic performance to be apt, profound, suggestive, provocative, poignant, obscure, entertaining, funny, or shocking?

In some sense, we’re looking to “do things with words”:   we’re aiming at perlocutionary uptake.  Examined from a somewhat absurd, but nonetheless traditional (Cartesian-solipsistic) standpoint of isolated (but somehow linguistic) consciousnesses:  we intuit certain entailments of our metaphors that we hope our audience also intuits.  Suppose, however, that we acknowledge that we’re out on parole from brutish apedom specifically because we’re on this langue journey together.  Then, it’s hard to say which is more remarkable:  (1) that we use the metaphor function of speech (Gr. metapherein) as a vehicle for the telepathic transfer of intelligence or (2) that we use the same function to invite the kind of social bonding that spawns political community and democratic co-navigation of our sociopolitical, economic, and physical cosmos.   Metaphor isn’t just simile sans feature-mapping.  Part of the intellectual pleasure we derive is “figuring out” the entailments of the metaphor–just as we intuit the logic of a joke, or trace the curve of a sexualized body past the regime of obscuring couture.  Following Locke’s theory of property, because we performed the intellectual labor, its fruits belong to us:  entailments, punchlines, fantastic jouissance.

In another sense, we’re exploring the “adjacent possible.”  Since a metaphor is a narrative in miniature, these remarks apply equally to metaphors and narratives, allegories and stories.  The adjacent possible is always qualified by topic (however technical) and by the mindsets & mindsettings of the interlocutors involved.  Physicists expect aptitude from their peers.  So too chemists, biologists, botanists, sci fi aficionados, philosophers, moralists, and even ordinary purveyors of pop culture.  Blockbuster movies sell tickets.  Jokes succeed or fall flat in social settings.  So too peer-reviewed journal articles, books, songs, paintings, fashion statements, scientific theories, proverbs, and parables.  All of these meme-laden semantic performances function as mental suggestions, whispering, “Join me in these realms of possibility.”

Similarly, by means of hortatory metapherein, every semantic performance is an invocation, a future-naming.  Each is an open-canon meme-set, rhizomatically extending into sparkling projections of dasein.  All culture (indeed all nature, so transformed) is a holistic and myriad-voiced, open invitation to “get in where you fit in”–aesthetically, logically, and morally–in all of your existential, social, creative, and intellectual capacities.  We mold the world’s potential to our own.  Archimedes had a very specific adjacent possible that transformed his altered bathwater levels into a eureka experience.  The same thinker, enjoying a cordial, sativa-elevated conversation on a cool summer’s evening, may perceive entire worlds in the same grain of sand she nonchalantly trampled after her last department meeting.

At our most salient, as we “name the nameless” together, we craft magic words that cast powerful social spells on our common future, and the long tails of our shared imagination summon a world that our psychosomatically-primed neurochemistry finds worthy of dopamine release.


Select References:

Aristotle, Rhetoric.
J. L. Austin, How to Do Things with Words.
William Blake, Auguries of Innocence.
Ted Cohen, “Metaphor, Feeling, and Narrative.”
Gilles Deleuze & Felix Guattari, “Rhizome” in A Thousand Plateaus.
Martin Heidegger, Being and Time.
Stuart Kauffman, Investigations.
George Lakoff & Mark Johnson, Metaphors We Live By.
John Locke, Two Treatises of Government.
Paul Ricoeur, The Rule of Metaphor.
Ferdinand de Saussure, Course in General Linguistics.


On Global Economy and the Spirit of the Age

“Some men aren’t looking for anything logical, like money.  They can’t be bought, bullied, reasoned, or negotiated with.”
– The Dark Knight

“America isn’t a country.  It’s a business.  Now give me my money.”
– Killing Them Softly

Theoretically, capitalism isn’t the problem.  Aristotle once argued that democracy was the “least among evils” as a political architecture (compared, for instance, to tyranny and oligarchy).[1]  Similarly, perhaps, capitalism…among economic architectures.  It’s certainly more egalitarian than feudalism.  No.  Capitalism isn’t the problem.  The way global capital is practiced today is a problem.  Hey U.S. citizens, are you surprised that the world is turning evil?  The legal mandate for corporations in the USA is that they maximize profit to shareholders.  The army of American corporations is an army of money-robots with no morality except for the single virtue:  maximize profit.  We made it that way.  It’s not just the norm.  It’s in the legal fabric that is subject to our votes.  If we want a better world, let’s change the laws that govern corporations, allowing them to be socially responsible.  Then let’s motivate corporate social responsibility.  After all, we’ve got centuries of traditions that sustain the legal fiction that corporations (of any size) are rational agents.  What kind of rationality would you give to an army of robots with super-human economic strength?  What future exists for a world ruled by ruthlessly greedy, giant robo-dinosaurs?

Suppose we win the race to the Singularity…and lose our very humanity in the process.  Suppose we accomplish Strong AI in ways that would make this generation’s Einsteins blush.  We map the physics of subatomic particles.  We build warp drives.  We travel the known universe in seconds.  These accomplishments would be marvelous, breath-taking, ground-breaking, and awe-inspiring.  Far better than pyramids.  But what if the trade-off is a world that is heartless and inhumane?  Is it worth it?

Yesterday, I saw a bum get arrested for sitting on a curb in the 99 Cent Store parking lot.  Was his crime being poor?  Smelly? Unsightly? Disturbing working class consumers trying to score some cheap coconut water and sunglasses?

Today, I got angry at a California Buddhist trying to tell me “you can’t take it with you.”  Can I leave it to my kids?  My community?  Do the words “legacy” or “heritage” mean nothing?  Don’t get me wrong.  Buddhism is, overall, a peaceful belief-system.  As a critique of Christianity, I used to say that “resurrection puts the Ego back into reincarnation.”  The issue I had today with the California Buddhist was precisely that of Ego.  Individualism.  Buddhism grown individualistic.  Here was a suboptimal display of Buddhism confronting private property.  He says, “You can’t take it with you.”  I ask, “Does that mean I can’t give back?”

Belief systems are social.  Moral choices are social.  Moral judgments are social.  Indeed, language is social.

Aristotle talks about how we owe our parents an infinite debt.[2]  They gave us the infinitely valuable gift of life.  What do we owe to our parents?  What do we owe to the rainforest, the oceans, the earth?  What do we owe to Homo Erectus, the first African story-tellers, the first Indian musicians, the first painters of caves, builders of cities, tamers of goats, planters of crops…Aristotle, Da Vinci, Bach, and the vast wealth of our global cultural heritage?  What do we owe to the sun, the moon, the solar system, the galaxy, the universe?  What do we owe to the love of our friends, our family, our community?  All of these are the common heritage of humanity.  All of these are infinite debts.

Infinite debts can’t be calculated.  They can never be repaid.  That doesn’t mean we can’t be accountable to them.  Kant said, “Keep your promises.”  Even financial ones?  What if you fall on hard times?  What if you’re a mega-bank?  Sorry, Kant.  Worse than promise-breakers are ingrates in the face of magnanimity.  Use your judgment.  Which is worse?  A man who repays every penny he ever borrows…but lacks gratitude for the infinite gifts at his fingertips (e.g. despises his parents, exacts usury from his borrowers, relentlessly harvests all common lands for personal profit)?  Or on the other hand, a person who cannot always repay her debts but fosters constant gratitude for life among her peers:  who creates, collaborates, and shares with her community?  Keep your promises.  Even more:  be grateful.

Today’s global economy wants to privatize the global abundance that is our common legacy.  And today’s global economy wants to make you in its image.  Not only are you expected to believe in private property, you are supposed to be motivated by private property to the extreme.  Coconut water and sunglasses.  Employment ads.  Patent trolls. Exxon. All of this is backward. Today’s ethos murders our common global heritage so that it might dissect it into private profits for a privileged few. Isolationist privatization breeds vicious social Darwinism.  It’s not just corporations that need new incentive.

Fortunately, at least for networked individuals, it’s not all bad news.  A shift is afoot.  Clay Shirky outlines it in his book Cognitive Surplus.  Projects and movements like Wikipedia, Wikileaks, the Open Source movement, the Pirate Party, Arab Spring, Occupy, and Makers of all sorts are emerging.  The motives here are not for profit.  Even Aristotle would laud creating and sharing and contributing freely out of our personal abundance as a noble and grateful response to the infinite gift we’ve been given.

Forces in the global economy systematically minimize the infinite gratitude that defines our humanity.  These forces are its moral defect, its ugliness.  Today’s practice of global economy is a bad habit.  Let’s change it.


[1] Aristotle, Politics, III.11.
[2] Aristotle, Nicomachean Ethics, 8.14.

Creation & Compensation

“All paid jobs absorb and degrade the mind” – attributed to Aristotle (Google it)

So far as I can quickly ascertain, the quote “All paid jobs absorb and degrade the mind” is a paraphrase of Politics, 1328b-1329a, “But at present we are studying the best constitution, and this is the constitution under which the state would be most happy, and it has been stated before that happiness cannot be forthcoming without virtue; it is therefore clear from these considerations that in the most nobly constituted state, and the one that possesses men that are absolutely just, not merely just relatively to the principle that is the basis of the constitution, the citizens must not live a mechanic or a mercantile life (for such a life is ignoble and inimical to virtue), nor yet must those who are to be citizens in the best state be tillers of the soil.”  (tr. Rackham)

Just previously, however, Aristotle had stated, “These then are the occupations that virtually every state requires (for the state is not any chance multitude of people but one self-sufficient for the needs of life, as we say, and if any of these industries happens to be wanting, it is impossible for that association to be absolutely self-sufficient). It is necessary therefore for the state to be organized on the lines of these functions; consequently it must possess a number of farmers who will provide the food, and craftsmen, and the military class, and the wealthy, and priests and judges to decide questions of necessity and of interests.”  In this passage, Aristotle uses ‘priests’ (ἱερεῖς) as a synonym for ‘councillors’ (βουλευομένους).  So apparently, it’s not that Aristotle thinks all paid labor is ignoble, just that it’s ignoble for citizens to work and be paid full-time for anything other than military, judicial, or legislative activities in the community or “partnership” (κοινωνίαν, 1252a).  IOW, what today we’d call “the work of politics.”   Specifically, those who strive to be Aristotelian citizens shouldn’t be farmers, merchants / bankers, or “mechanics” (post Industrial Age, read “factory workers,” the great 19th and 20th C stock metaphor for all labor).  Aristotle’s pupil Alex built Empire.

Of course, in his great work on politics, Plato observes that “no one wishes to rule voluntarily, but they demand wages as though the benefit from ruling were not for them but for those who are ruled” (Republic, I.345e tr. Bloom).  The entire passage from which this quote is lifted in Republic I resonates with the sentiment “All paid jobs absorb and degrade the mind”.  Summary:

[Socrates:] “Then this benefit, getting wages, is for each not a result of his art; but, if it must be considered precisely, the medical art produces health, and the wage-earner’s art wages; the housebuilder’s art produces a house and the wage-earner’s art, following upon it, wages; and so it is with all the others; each accomplishes its own work and benefits that which it has been set over. And if pay were not attached to it, would the craftsman derive benefit from the art?” [Thrasymachus:] “It doesn’t look like it,” he said. [Socrates:] “Does he then produce no benefit when he works for nothing?” [Thrasymachus:] “I suppose he does.” (Republic, I.346d tr. Bloom).

As I see it, Plato is saying something like this:  Artists are inherently valuable to communities. Artists create value by the active exercise of their virtue of being artists. Each artisan produces benefit apart from, before, and beyond any transactional wages that could later be attached to the benefit she creates.  Initially, the artist creates benefit for herself and whomever she, in her magnanimity, gifts with that benefit.  For example, the shoemaker would have the best shoes in abundance…but only shoes.  The doctor’s family would be healthy, but might struggle to plant crops or forge flatware.  The community recognizes the potential communal benefit of the artist by embracing her art and asking her to benefit the community rather than just herself.  In exchange, they offer her wages to benefit them rather than only herself. The community grants the artist a measure of Universal Sign (Marx, money trades for anything) in exchange for the Particular Signed (the artisan’s work).  Higher still, I’d like to think that it really can be gifting in both directions, with the notion of “exchange” abstracted as far as possible.

As far as Aristotle is concerned, the situation is even worse.  Slavery isn’t merely his next point in the book; sadly, it’s quite close to being his first point.  IOW, it’s foundational.  His myth is:  we’re born masters & slaves.

“In this subject as in others the best method of investigation is to study things in the process of development from the beginning. The first coupling together of persons then to which necessity gives rise is that between those who are unable to exist without one another: for instance the union of female and male for the continuance of the species (and this not of deliberate purpose, but with man as with the other animals and with plants there is a natural instinct to desire to leave behind one another being of the same sort as oneself); and the union of natural ruler and natural subject for the sake of security (for he that can foresee with his mind is naturally ruler and naturally master, and he that can do these things with his body is subject and naturally a slave; so that master and slave have the same interest).” (Aristotle, Politics I.1252a)


English-Greek for passages cited:
– Aristotle, On Work: http://bit.ly/WoSGs8
– Plato, On Work:  http://bit.ly/Uw53TR
– Aristotle, On Master-Slave:  http://bit.ly/VG9xur

English Plato (with Stephanus numbers):
– Republic (tr. Bloom):  http://bit.ly/13e1do6

English Aristotle (with Bekker numbers):
– Politics (tr. Carnes Lord):  cheap on amazon, hit me with a link if you find it online free.

Secondary Refs:

– Politika:  http://www.iep.utm.edu/aris-pol/
– Politeia:  http://bit.ly/UDrRCX (Bloom’s Preface)
– Zoon Politikon: http://en.wikipedia.org/wiki/Wage_slavery

An AI Direction for Today’s Giants


Google claims to have built “a web of things” to help drive its new Knowledge Graph.  From words to concepts and back?  Just as third-party researchers are using Google’s search algorithm to find cancer biomarkers, Google is claiming to have “found concepts.”  What kind of concepts?  Google’s Norvig explains, “We consider each individual Wikipedia article as representing a concept (an entity or an idea), identified by its URL.” So Google’s using a Wikipedia-derived Explicit Semantic Analysis to achieve Semantic Search.  Novel.
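The Wikipedia-as-concept-space idea (Explicit Semantic Analysis) can be sketched in a few lines: represent a text as a weighted vector over Wikipedia articles, then compare texts by cosine similarity in that concept space. This is a toy illustration, not Google’s actual pipeline; the three-article “Wikipedia” and the word-overlap weighting below are invented stand-ins (real ESA uses TF-IDF weights over full article text).

```python
from collections import Counter
from math import sqrt

# Toy "Wikipedia": each article (concept) reduced to a bag of words.
# A real ESA index would hold TF-IDF weights over millions of articles.
CONCEPTS = {
    "Leonardo da Vinci": "painter renaissance italian artist inventor anatomy".split(),
    "Renaissance": "renaissance art europe painter sculpture rebirth".split(),
    "Computer science": "algorithm computation software data program".split(),
}

def concept_vector(text):
    """Map a text to a weighted vector over Wikipedia concepts:
    here, the weight for a concept is the text's word overlap with its article."""
    words = Counter(text.lower().split())  # Counter returns 0 for missing words
    return {
        concept: sum(words[w] for w in article)
        for concept, article in CONCEPTS.items()
    }

def cosine(u, v):
    """Cosine similarity between two concept vectors (same key set)."""
    dot = sum(u[k] * v[k] for k in u)
    nu = sqrt(sum(x * x for x in u.values()))
    nv = sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

a = concept_vector("a renaissance painter and inventor")
b = concept_vector("italian renaissance art")
c = concept_vector("a software algorithm for data")

print(cosine(a, b) > cosine(a, c))
```

The two art-flavored texts land closer together in concept space than either does to the computing text: even where surface vocabularies differ, the overlap shows up at the level of concepts rather than strings.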

Meanwhile, Bing is doing Social Search…using Facebook’s Social Graph.  Great for seeing what shoes or hotels or articles your friends like…and other “niche knowledges.”  Not so great outside your community’s niches, your communal “filter bubble.”  (Google’s Knowledge Graph tackles the problem from the other direction:  start with the most generic knowledge niches.  If you’re not searching for Da Vinci, you might not get Knowledge Graph.)

Then there’s Apple getting sued over SIRI for “overstating the abilities of its virtual personal assistant.”  Who’s not overstating these days?  Apple’s ad teams have tailored a message that achieves the precise amount of ambiguity to maximize sex appeal and plausible deniability.  The suits won’t stick.

Of course, everyone’s attempting to build brand loyalty so they can rake in dollars.


Deleuze & Guattari define philosophy as the creation of concepts.  I marvel at Google (+Wikipedia), Bing (+Facebook), and SIRI.  They are creating concepts–at least of a certain kind.  When you search for Da Vinci on Knowledge Graph and it groups renaissance painters together, this appears as abstraction, generalization.  When you search SIRI for Indian Food and she finds restaurants in your area, this is a form of pragmatic localization.  When you search Bing for fashion, and it tells you what your friends are wearing, it’s creating concepts in the space of social awareness.

Intelligence is metaphor all the way down.  All the services described above metaphorize in some nascent fashion.  Lakoff and Johnson summarize:  “the essence of metaphor is understanding and experiencing one kind of thing in terms of another.”[1]  General AI can be achieved by building out multi-dimensional metaphorizing algorithms.

Interestingly, SIRI, Google and Bing each assume a specific want (desire) in the user, and tailor their service accordingly.  SIRI assumes you don’t want abstract knowledge about the history or characteristics of Indian Food, but that you want to eat some, nearby, soon.  Google assumes you want general knowledge of Renaissance painters or other search topics.  Bing assumes you want to know what your friends and acquaintances think.

What if what you want is general AI?  To achieve AI, concepts need semi-permeable membranes between them.  From Turner & Fauconnier’s “Conceptual Blending” to Ridley’s When Ideas Have Sex, ideas need room to breed.  As a first step in the right direction, I envision a service that understands and generates metaphor.  At first, I want it to be capable of understanding why and when it might be apt to say “Juliet is the sun,” “Man is a wolf to man,” or “You made your bed, now lie in it.”  For this, we need a Pragmatic Ontology, a subtle notion of what makes daily human actions meaningful.  Step two involves metaphorically extending the algorithms necessary for the first form of metaphorizing…finally achieving, for instance, an understanding of how identification with the hero of a story is a form of metaphor, how the move from a string to a thing is metaphor, how the metaphorical process is ubiquitous.   That’s what I want to see built.
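A naive version of that first step can be sketched as feature-mapping: read “X is Y” as a search for salient features of the source Y that transfer to the target X. The hand-coded feature sets below are hypothetical stand-ins for the Pragmatic Ontology a real system would need, and (as noted earlier) metaphor is more than simile-style feature-mapping; this only gestures at the entailment-finding step.

```python
# Toy feature-mapping reading of "X is Y": project salient features of the
# source concept onto the target and keep the ones that plausibly transfer.
# These feature sets are invented for illustration.
FEATURES = {
    "sun":    {"radiant", "life-giving", "central", "hot", "spherical"},
    "wolf":   {"predatory", "ruthless", "pack-hunting", "furry"},
    "juliet": {"radiant", "life-giving", "central", "beloved"},
    "man":    {"predatory", "ruthless", "social", "rational"},
}

def entailments(target, source):
    """Features of the source that transfer to the target:
    the shared ground a hearer must 'figure out'."""
    return FEATURES[source] & FEATURES[target]

print(sorted(entailments("juliet", "sun")))   # what "Juliet is the sun" suggests
print(sorted(entailments("man", "wolf")))     # what "Man is a wolf to man" suggests
```

The interesting (and hard) part is everything this sketch hides: where the feature sets come from, why “hot” does not transfer to Juliet, and how aptness depends on context, which is exactly why a Pragmatic Ontology is the prerequisite.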

Afterward, I’ll be satisfied enough to navigate to a local Indian restaurant to contemplate Donatello’s brushwork like my friends do.


[1] Metaphors We Live By (1980), 5.

Co-creating Values

“Companions, the creator seeks, not corpses — and not herds or believers either. Fellow creators the creator seeks — those who write new values on new tablets.”

– Nietzsche, Thus Spake Zarathustra, Prologue

“Act only according to that maxim whereby you can, at the same time, will that it should become a universal law.”[1]

– Kant, Groundwork of the Metaphysics of Morals

@Kant.  First critique:  What are you smoking, Kant?  Too big, too fast.  In startups as in morality, inflated abstraction kills.  Get off your megalomaniacal dopamine.  Too many entrepreneurs and dreamers fail, thinking “Everyone will love it.  It’s gonna be huge.”  To some extent, Hegel reminds you that your philosophy is only good at blowing morality bubbles.   Despite your best intentions, truth (or knowledge) in morality isn’t like math or physics.  Your magnum opus dreamed of updating certainty in metaphysics to match Newtonian physics.  Granted, Kant:  Newton’s work titillates.  Enter Einstein.  Morality is contextual.  Still, your insight remains:  morality must always be shared.

@Kant.  Second critique:  your model of the self is isolated.  Perhaps it was politically necessary–in order to revolt from monarchy and aristocracy–to posit a robust notion of the individual, endowed by Reason with freedom, autonomy, individuality, and rights.  But highlighting individuality hides solidarity.  Highlighting the fact that we each have a unique genome hides the fact that we all share over 99% of it.  How much more do we share memes?  What first appeared as maxims now manifest as social memes,[2] metaphors capable of cementing solidarity among those who share and live by them.  Maxims (like fetishes) are private and unique.  Memes (like totems) are shared while remaining personal and lived.  Memes are values.  Go masturbate to your maxims, Kant.  I’ll take my meme-experiment to the pub.

@Nietzsche.  Nice Bible reference.[3]  How does one write on these “new tablets” of the heart?  We’re always the page, but we’re also the pen.  We author and coauthor each other every day.  I hear so many read you, Nietzsche, as an intensification of Kant & Descartes:  your command “Become hard!” can have a very individual ring in some ears.  Today’s egoists and identity-addicts listen thus.  But here, you seek companions.

@Nietzsche.  In logic, the moral premise (often hidden) is the premise that contains the should.  The should is impotent without the will.  Action alone matters, and action requires only will, not should.   In a world where the standard template for should has been hijacked, what new moralities are possible?  What does it mean for will to free itself from alien law, to roar to death its 1,000 golden “shoulds”?  At least that desire and impetus-to-act feel authentic.  Becoming a hero of the will, a liberator of choice, does not always require isolation as a prerequisite.  It often requires companions.

What does it mean to be a creator of values?  Values aren’t maxims.  Maxims can be scribbled in the dark.  Values need at least one like, one share.  Values are memes.  They live somewhere between maxims and Laws of Reason:  never private, never universal, rarely widespread, always shared.  This is the New Enlightenment–don’t strive for Universality (like Kant), or even virality (like today’s fame-frenzied startup marketeers).  Though spread is an objective metric of any value, from the inside, creating values always feels like sharing.

The first step toward creating values is to have values.  Transvaluate your maxims into memes.  Live in public.  Let your memes compete with other memes for survival.  If the lion’s share of this self-overcoming feels “hard” at first, it is.  And it only gets harder.


[1] For Kant, a “maxim” is a private principle for guiding personal behavior.

[2] In Dawkins’ original sense: a “meme” is a “unit of mimesis.”

[3] 2 Corinthians 3:3.

Slices & Traces

Slices & Traces
In graduate school, I once heard a medieval scholar remark that we now knew what Thomas Aquinas did on nearly every day of his life.  While such a feat is perhaps the wet dream of a medievalist, technology is reaching the point where the same may soon be true of me or you.

Historians compile numerous traces (any historical artifact that says “Thomas was here”) into slices (e.g. a biography).  In the digital age, what fascinates me is that numerous ready-made slices of our virtual lives may be compiled easily from databases that archive massive amounts of our personal digital traces.

I recently had the opportunity to experiment with Stanford’s Muse Project, which provides various analyses and visualizations based on personal email history and browser history (sentiment analysis, social group change over time, etc.).  Pros:  the program provides an interesting slice of one’s virtual self.  Cons:  the slice of my personal history recorded in my email database feels partial and one-sided.[1]  For another example, consider Facebook’s recently released “timeline.”  The history embedded in your Facebook timeline is yet another slice of your personal history.  Each slice tells its own story, albeit an incomplete story.  A slice is just a slice.

What if, like an fMRI, we were able to capture and compile slice upon slice?[2]  Would the slices add up to a complete picture?[3]  What if one were to aggregate and integrate all the slices of one’s virtual life?  What if you had the tools to capture & integrate your own personal data from email history and browser history and add that to your data from social networks (Facebook, Twitter, LinkedIn), dating sites (eHarmony, Match, OKCupid), bookmarking sites (StumbleUpon, del.icio.us, Digg), music sites (Pandora, Grooveshark), movie sites (Netflix, Blockbuster), video sites (YouTube, Vimeo, Dailymotion), commerce sites (Amazon, eBay), banking sites (Mint, Quicken), location services (Foursquare, GPS), SMS history, and blog corpus?  What if, to that already rich textual and social data, one added perceptual data capture via webcams, haptics, and EEG/GSR?  What if one were to sift, analyze and integrate the data using textual algorithms (corpus linguistics, LSA, ESA, sentiment analysis), social algorithms (network & influence analysis), and perceptual algorithms — replete with visual recognition (facial, gesture, object, movement), audio recognition (voice, music, sound), and touch recognition (texture, heat, pressure)?  (See Figure 1)
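Mechanically, the aggregation step is straightforward because nearly every digital trace carries a time-stamp: slices exported from different services can be merged into a single chronological stream. A minimal sketch, with invented traces standing in for real service exports:

```python
import heapq
from datetime import datetime

# Toy slices: each source yields (timestamp, source, event) traces,
# already sorted by time within each source. All records are invented.
email = [
    (datetime(2012, 3, 1, 9, 15), "email", "sent: 'Re: department meeting'"),
    (datetime(2012, 3, 2, 18, 40), "email", "received: 'pub tonight?'"),
]
checkins = [
    (datetime(2012, 3, 1, 12, 5), "foursquare", "check-in: 99 Cent Store"),
    (datetime(2012, 3, 2, 19, 30), "foursquare", "check-in: pub"),
]
listening = [
    (datetime(2012, 3, 2, 19, 45), "pandora", "played: Bach, Cello Suite No. 1"),
]

# Since each slice is time-sorted, a k-way merge yields one unified timeline.
timeline = list(heapq.merge(email, checkins, listening))

for ts, source, event in timeline:
    print(ts.isoformat(), source, event)
```

The merge is the easy part; the hard parts are the condensation and selection discussed below, and the standardization question raised in footnote [3]: every real service exports its traces in a different schema.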

Slicing & Tracing
Such a system of integrated personal data, collected en masse (even if anonymized), would prove invaluable to social scientists, historians, marketers, Big Brothers, and researchers of all ilks.  Although we’d never achieve Rankean history “wie es eigentlich gewesen ist” (as it actually happened) through such a system, it represents a potential tool (among other tools we’re developing) that will soon get us closer to historical realism (or even hyper-realism).  What I’d like to discuss today is not the fine-grain detail we may someday achieve by integrating slices and traces.  Instead, today I want to talk about the slicing and tracing.

Suppose you mummify your information…all of your information.[4]  You’re still just a data-fossil in a museum exhibit a millennium from now (and if everyone gets mummified, probably a poorly-visited exhibit).  But your data doesn’t even make it to the museum without first undergoing some form of condensation and selection.[5]  I don’t care how much you love your grandpa, you’re not going to use your entire life to watch a second-by-second video of his entire life.

Before the digital age, condensation and selection happened naturally in places like family photo-albums and dinner-table stories.  These human-sized brain-morsels could be chewed and digested comfortably.  In the digital age, a deluge of data makes you cross-eyed and bloated while historians babble about Kim Kardashian and advertisers hypnotize you with French fries.  As we speak, historiography is being asked to develop some frighteningly powerful tools to condense uncompressed information, select salient aspects, and present us with soundbites (think Robin Williams in The Final Cut).  Too much data is the first challenge facing next-gen story-telling gurus.

But too much information (TMI) is merely the prima facie challenge.  The real challenge, as I see it, is not TMI but too little intelligence.  I’ve often said that “after the Information Age comes the Intelligence Age.”  I want to see a generation of “intelligence scientists” rise up to replace today’s “information scientists.”  Would you rather preserve your intelligence (creativity, intuition) or your information?[6]  What would that even look like?



In the spirit of Aristotle and Nietzsche, I’ve nicknamed the data-integration algorithm-hub “VirtuAlly.”


[1] Also, the sentiment analysis engine in Muse is amateurish.

[2] The current discussion assumes that the capture, aggregation and integration of data would be for private and personal use only.  With increasing sousveillance, each of us may be able to compile an increasingly complete picture of our personal histories.  As technologies for personal data capture, aggregation, and integration progress, the following philosophical stance will also snowball in importance:  an individual’s data is his or her inalienable property.

[3] Temporality is a dimension common to each of the following data slices.  Each slice is like a layer of bedrock, and data archived in each aggregates many fossilized traces of one’s virtual life.  Time-stamps are common in each digital trace, making chronological sorting easy.  Who will standardize the aggregation and integration of these slices, as we once standardized the USB port?

[4] Lifenaut.com offers a digital (and biological!) time-capsule for would-be immortality-seekers.

[5] By condensation I mean something like summary, and by selection I roughly mean meme-discrimination.

[6] Arguably, neither is any good without the other, so my answer is “both.”



Tools & Links:

– What story does your data tell?  (New York Times data analyst on visualization)
– Stanford Muse:  creates a slice of your personal history using your EMAIL, with capabilities for BROWSER HISTORY (best in Firefox).  The program runs securely on your local machine, so there’s no chance your data will make it to the cloud.  I’ve experimented with this program with interesting results.
– Capture your LOCATION DATA.
– Capture BROWSER HISTORY (a friend of mine built this).
– Lifenaut:  interactive time capsule, digital self-storage space (digital locker).  Also, they store your DNA…free (suggested donation $399).


Review of Patricia Kuhl, “Is speech learning ‘gated’ by the social brain?” Developmental Science 10:1 (2007), pp.110-120.

“I consider the body of a man as being a sort of machine…” – Rene Descartes

In this article, Patricia Kuhl discusses the results of a set of experiments designed “to compare the efficacy of live social interaction as opposed to televised or audio-only presentation as vehicles for learning foreign-language material” for nine-month-olds (112).  Kuhl found that infants at this age learn more from interaction with their mothers than from television or audio only.  Kuhl did not discuss any experimental designs in which televisions had fur and tits and had raised the child from birth.   Moreover, Kuhl did not discuss experimental designs in which infants had received, from birth, several hours a day of rigorous operant conditioning that primed them to “veg out” in front of televisions.  Regardless of such omissions, in summary, when it comes to language acquisition, Kuhl found that actual mothers are better mothers than televisions or audio devices.  Kuhl’s results reinforce what human mothers have suspected for hundreds of thousands of years—if a mother wants her infant to learn a language, her best strategy is to socially reinforce language skills in a natural and nurturing setting.

Kuhl’s article is a welcome, if partial, corrective to a sad state of academic affairs.  The HUMANS ARE MACHINES metaphor, in vogue since Descartes, has been refined in the past few decades into the deeply entrenched and stratified metaphor HUMAN BRAINS ARE COMPUTERS.  The metaphorical entailments reach all the way down to the minutiae of brain function, in which language-users are said to “process” “information.”  (If information is just information, why would it matter to an infant whether linguistic information comes from a TV screen or from their own actual mother who carried them for nine months, gave birth to them, then suckled, cuddled, nurtured, and loved them for nine more?)  All hail the Information Age.

If adult human brains are computers, then the development of those brains implies the metaphor that INFANT BRAINS ARE LITTLE COMPUTERS.  Kuhl begins her seminal article by announcing the theory she wishes to depose, what she calls “the computational conclusion”:  “Research in the last decade has provided some hints on how infants ‘crack the speech code’ – they possess powerful computational strategies that have been shown to advance early language learning” (110, emphasis mine).  Kuhl partially explodes the BABIES ARE COMPUTERS metaphor by promoting the hypothesis that normal social interaction is crucial to an infant’s language development.

It is understandable that with the advent of widespread personal computing and internet usage in the early 1990s in the Western First World, the BRAINS ARE COMPUTERS metaphor might capture the popular imagination as a pop-culture meme and the academic imagination as a potentially fecund research tool.  The philosophical underpinnings of the psychological establishment’s methodologies were not immune to the spread of this powerful meme.  Kuhl’s genuflection to this philosophical style of analysis is nowhere more evident than in the section entitled “What constitutes a social agent?”  She parses social agency into interactivity, contingency, and reciprocity (turn-taking), and asks such questions as, “…would an inanimate entity, imbued with certain interactive features, induce infant perception of a social being?  And if so, could infants learn language from such a socially augmented entity?  …Would infants learn from an interactive TV presentation, one in which the adult tutor was shown on a television but actually performing live from another room so that contingent looking, smiling, and other reciprocal reactions could occur?  Could infants learn a new language from a socially interactive robot?” (115).

Kuhl discovered that social interaction is integral to language-learning in infants.  But isn’t social interaction, especially between a mother and her infant, such a bother?  Isn’t there another way?  What about robots?  Computers?  Televisions?  If mothers could simply outsource to robots the time they spend teaching language to their infants, they would be much more productive in the workplace and would have far more leisure time.  The same felt-imperative, of course, applies to teachers and professors from preschool through post-doctorate:  why waste time on face-to-face instruction if we can make learning virtual?  Let’s find a way to make machines educate us.  If only the world of The Matrix were here already, we could plug ourselves in…and, even better, we could plug in our infants.  Why do today’s Luddites persist in preferring Mother to Matrix?

The answer may have to do more with Nim Chimpsky than Noam Chomsky.  In a section entitled “Neurobiological connections:  communicative learning in animals,” Kuhl notes that “young birds operantly conditioned to present conspecific song to themselves by pressing a key learn the songs they hear” (115).  While Kuhl does not address primates in this section, it is well-known that some great apes raised in zoos from infancy are taught to comprehend human language on computer screens.  Because no experiments have yet been performed on human infants raised in zoos from infancy and taught human language on computer screens, evidence that might balance the following claims is scant.  However, it is clear that great apes raised in human environments from birth and taught American Sign Language (ASL) are immersed not only in human communication, but also in human culture.  Of course, such apes lack an actual mother’s breast-feeding and care, and, being another species, could never integrate into a society of homo sapiens, try though they may.  In one famous instance, Lucy, a chimp raised by human caregivers from birth to age 12, displayed sexual attraction only toward humans after being placed in a chimpanzee rehabilitation center.  It is reported that Washoe, another chimp raised by humans from birth, when placed among chimps for the first time, had trouble adjusting to the idea that she was not human.  However, after integrating into her community of ASL-speaking chimps, Washoe spontaneously used ASL to communicate with others and even taught the language to her offspring.

Given the foregoing, it might be argued that caregivers nurture their young to be adaptable to a culture.  Inculturation (socialization, “Bildung”) is a Gestalt process—complex, intimate, generational, and communal.  In humans, language acquisition is only one part of this process.  Human young adopt skills that help them adapt to their local environment.  For instance, Michael Merzenich points out that 40% of boys in São Paulo can bounce a soccer ball on their heads by age six.  Freud enlightened the West to the fact that adult human sexuality begins to form in infancy.  Feral children seem ill-adapted.  Children raised by TV often demonstrate undesirable behavioral and psychological traits.  In this context, Kuhl’s observation that “language learning relies on children’s appreciation of others’ communicative intentions, their sensitivity to joint visual attention, and their desire to imitate” (110) is welcome.

The notion that inculturation is a Gestalt process points to philosophies of language and cognition that offer stark alternatives to the “computational” model.  For instance, Michael Reddy reminds us that speech is not a mere “conduit” for information. Searle and Austin remind us that speech is a human action, and that each utterance may be fruitfully analyzed first and foremost within the realm of human interaction.  Crucially, Nietzsche reminds us that the desire to “exist socially” lies at the origin of language.  Many post-structuralists suggest that symbolic representational systems are ubiquitous—including Barthes who argues that “fashion statements” can be analyzed as signs in a system, and Lacan who contends that the unconscious is structured as a language.  Upshot?  Kudos to Kuhl:  Language acquisition, in any form, is an aspect of socialization.