Legacy System


Android Jones, “Forward Escape”


This paper proposes the creation of what I call the “Legacy System,” a system whose design begins, in phase one, with a person’s systematic capture of their own personal data. It is a system for ensuring that data generated by a person remains of that person and for that person.[i] The system includes (1) an organization (conceived as non-profit or not-solely-for-profit), that issues (2) an iron-clad, user-protecting contract for (3) a device and operating system running (4) an application (“legacy software” / “legacy app”) that backs up (5) personal data to (6) a private, secure, user-controlled virtual machine. In phase two, the “big data” on that personal machine is subjected to (7) artificial intelligence algorithms (machine learning code) whose goal is to maximize (8) personal happiness (conceived as an ongoing exercise of virtue, with respect to both success and fulfillment).


We begin with human existence and meaningful human action as our primary value. Humans are a technological species. We use tools. History demonstrates that our species began differentiating itself from others with the invention of the handaxe. Following philosopher Andy Clark, the handaxe can be seen as an extension of the human body, of the human mind. Perhaps even more importantly, language is a human invention, a human technology. Language helps us form thoughts and communicate them to others. Language is the original telepathy. Fast forward to the digital age, and humans are still humans, but we are using digital technologies and, because of that, we are leaving digital traces, or “data”. Following Matt Ridley, there is a reason why the handaxe and the smartphone are roughly the same size and shape. The human hand holds a smartphone as it would a handaxe. Both are extensions of the human body, the human mind. These observations make clear why it is crucial that, in the Legacy system, the human interface begin at the device and operating-system level. In the generation of “data”, there is no break in the chain of “input” from human mind to human hand to smartphone to operating system to application. This point is just as important for user experience as it is for the legal protection of any data generated by such means.

Humans, using technology, are the wellspring of data. The fundamental idea behind the Legacy system is to establish a private pool at the wellspring of data—before it escapes into the wider world. To use another metaphor, the Legacy system keeps the original “wet-ink” data, and releases a “copy” into the world (through an app, etc.). If users track themselves and retain a copy of their actions at the device and operating system levels, there can be no legal argument against the claim that the user owns the original.

Think about the sensors (and actuators) in your smartphone. To name a few: camera(s), microphone, radio, Bluetooth, Wi-Fi, GPS, gyroscope, accelerometer, magnetometer, proximity sensor, thermometer, hygrometer, barometer, and ambient light sensor. Channeled through an operating system, these sensors and actuators provide the hardware infrastructure for the primary software functionalities that comprise the reasons we carry our smartphones: phone calls, SMS, email, internet, social media, navigation, and myriad applications.[ii] Each time we use any of those higher-level software functionalities, someone else is capturing our data inputs (e.g. a Google search, a Facebook like). Each such “search” or “like”, however, originates in our all-too-human life and its perceived needs. Why put our lives in the hands of someone else who manifestly does not have our best interests at heart?

Personal data privacy has been in the headlines since Snowden. Recently, the Facebook and Cambridge Analytica scandals have highlighted the issue once again and have prompted some to call for a “User Data Bill of Rights.” Holding businesses accountable for their collection of user data (which is sometimes massive—looking at you, Google and Facebook) is certainly a good start, but doesn’t strike at the root of the issue. The Legacy system does.

Early Prototypes

The core vision of the Legacy system (and its early concept) is something I call “VirtuAlly,” inspired by a seminal article by Danny Hillis, the Quantified Self movement, and of course, Aristotle’s discussion of “Friends of Virtue” in his Nicomachean Ethics. The idea is to turn the self into Big Data and run Machine Learning over that data. In other words, the goal is to build a system to collect as many of my own digital traces as possible into a database. The machine learning that runs over that data would have the explicit goal of making my life better (and not, for instance, serving me ads or trying to sell me shit I don’t need). A truly personal AI. The following mind-map provides a glimpse into the data-capture side:

One shortcoming of this early concept is that data capture operates downstream from the application layer. As such, it is beholden to any number of contracts of adhesion, which may cede ownership of the data to the platform.

Moving from concept to prototype, I have developed the following personal journaling system using IFTTT and Evernote, a project I call “LifeLine” (think “Life Timeline”). IFTTT.com (If This, Then That) is a service that allows users to create “recipes” (basically little logic modules) to connect various popular online applications using front-door APIs. The “If” side specifies the inputs (or “triggers”) and the “Then” side specifies the outputs (or “actions”). So IFTTT provides the logic, and Evernote serves as the data repository / database. Here is a sampling of the types of logic rules I have set up to generate input:


And a sampling of the output:


Steve Jobs promoted the principle that technology should be either beautiful or invisible. A benefit of the above system is that it operates invisibly. I simply go about my daily life, and the logic rules work behind the scenes to capture the data I’ve told them to capture and to archive it in Evernote. In my Quantified Self practice, I primarily use such data for health purposes (the system provides excellent data benchmarks for diet and exercise),[iii] time-management, and a sort of externalized (and infallible) memory. As more features are added to the system, it becomes clear that the database itself could function as a sort of “digital legacy” to be handed down to heirs—along with, or instead of, a shoebox full of photos. It is a step in the direction of digital immortality.
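Mechanically, each of the rules above is a tiny trigger-to-action pair. A minimal sketch in Python (the event types, payloads, and the sample recipe are invented for illustration, not IFTTT's actual API):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Recipe:
    # The "If This" side: a predicate over an incoming event.
    trigger: Callable[[dict], bool]
    # The "Then That" side: an action producing, say, an archived note.
    action: Callable[[dict], str]

def run_recipes(recipes: List[Recipe], event: dict) -> List[str]:
    """Feed one event through every recipe; collect the actions that fire."""
    return [r.action(event) for r in recipes if r.trigger(event)]

# Hypothetical rule: when a photo is posted, archive a journal note.
archive_photos = Recipe(
    trigger=lambda e: e.get("type") == "photo_posted",
    action=lambda e: "Journal note: photo taken at " + e["place"],
)

note = run_recipes([archive_photos], {"type": "photo_posted", "place": "Griffith Park"})
```

The point of the sketch is the invisibility: once the predicates are registered, events flow through unattended, exactly as the LifeLine rules do.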

A problem remains, however. As mentioned before, the current early prototype falls prey to the same issues as the early concept—in the current architecture, data capture operates downstream from the applications themselves (e.g. Facebook, Gmail).

Prototyping Proposal (Data Capture)

Here is a fast track for prototyping the Legacy system. The hardware (device), operating system, and VM components could be off-the-shelf solutions. The device is conceived as a GSM smartphone. The operating system is conceived as a kernel-hardened, open-source build of Android. Virtual machines would run on something like Amazon Web Services (likely Linux). The smartphone data plan could be negotiated via strategic partnership with a company like FreedomPop (which uses the Sprint and AT&T networks and currently offers a “Privacy Phone” / “Snowden Phone”). If we can use off-the-shelf infrastructure, the main work would be building the “Legacy software” app, which is basically a massively powerful key-logger (in fact, an all-activity-logger) that uploads daily to a virtual machine private to the user.
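The logger's core loop is simple to sketch. A minimal sketch, assuming invented event kinds and payloads (this is not a spec for the Legacy app, just its shape):

```python
import json
import time

class ActivityLogger:
    """Sketch of the all-activity-logger: buffer timestamped events locally,
    then flush them as a single daily batch bound for the user's private VM.
    Event kinds and payloads here are invented for illustration."""

    def __init__(self):
        self.buffer = []

    def log(self, kind, payload):
        # Every captured activity gets a timestamp at the moment of capture.
        self.buffer.append({"ts": time.time(), "kind": kind, "payload": payload})

    def flush_daily_batch(self):
        # In a real system this blob would be encrypted and uploaded to the VM;
        # here we just serialize and clear the local buffer.
        batch = json.dumps(self.buffer)
        self.buffer = []
        return batch

logger = ActivityLogger()
logger.log("keystroke", {"app": "notes", "chars": 14})
logger.log("gps", {"lat": 34.05, "lon": -118.24})
batch = logger.flush_daily_batch()
```

Batching daily (rather than streaming continuously) also matches the phone's constraints: upload once, on Wi-Fi and power, then start a fresh buffer.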

Ideally, all data would be secured with a blockchain. Storing the raw archive on-chain is impractical at this volume; the more plausible pattern is to keep the archive on the user’s VM and anchor cryptographic hashes of it on-chain, making any later tampering detectable. Of current platforms, Ethereum seems adequate for the task.
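Storing every byte on-chain would be costly; a common pattern is to publish only a digest. A minimal sketch of such a daily hash chain (the batch contents are invented):

```python
import hashlib

def anchor(prev_anchor: str, daily_batch: bytes) -> str:
    """Chain each day's batch to the previous anchor; publishing only this
    digest on-chain makes later tampering with the private archive detectable."""
    return hashlib.sha256(prev_anchor.encode() + daily_batch).hexdigest()

GENESIS = "0" * 64
day1 = anchor(GENESIS, b"monday's archived events")
day2 = anchor(day1, b"tuesday's archived events")
```

Because each anchor commits to all prior days, re-writing any past batch would change every subsequent digest and disagree with the public record.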

Prototyping Proposal (General AI)

A discussion of the General AI involved merits its own conversation, and a separate paper, “Developing Conscious Agents”, is forthcoming (in collaboration with a developmental psychologist). For present purposes, initial prototypes for the Legacy system would begin with off-the-shelf machine learning techniques. This means the “Big Data” of the self would be collected privately and analyzed privately by a personal AI. Private, personal data collection lays the real and legal foundation for a culture of consent with respect to data. Opportunities would exist within this culture for sharing specific amounts and degrees of personal information, anonymized appropriately, with a communal AI whose goal would still be to help the community and its individuals maximize their personal and communal virtue. To be clear, there are two levels here: the personal AI, and an opt-in communal AI.

A highly abbreviated summary of “Developing Conscious Agents” is worth sharing, as its core ideas will scaffold the AI in all later-generation VirtuAlly instances. The word “developing” in the title is critical. Much ink has been spilled of late wondering if AI is best approached through the model of child development. Let’s take this strategy to its logical conclusion. The idea is to clone human intelligence as it develops in real time. In short, we propose developing a virtual agent modeled after a live newborn subject. In each instance of the experiment, the design includes two developing agents: (1) an infant with real senses (also equipped with virtualizing sensors, including camera, microphone, environmental sensors, and so on), and (2) a virtual infant with virtual senses living in a virtual environment. The virtual environment, and all virtual bodies within it, are a physically realistic construct of the real world, driven by a highly accurate and granular physics engine (including, but not limited to, an optics engine).[iv] Sensory data, collected from the real infant’s experience, streams to the virtual infant’s database, where nested modules of machine learning algorithms constantly run over the collected data. The physical infant’s sleep periods provide extra windows for processing and engineering assessment. The virtual infant has the opportunity to learn exactly what (and how) the real infant learns. Because the virtual agent’s conscious experience is simply actual experience copied into a virtual environment, the virtual agent “develops” exactly as the infant does, with dynamics such as joint attention, the visual cliff, the mirror phase, and theory of mind emerging for both agents simultaneously in real time.

The benefits of this experimental design are too many to elaborate here. To highlight one, per Saussure’s linguistics, the virtual agent will inhabit not only a rich world of signifiers but also a rich world of signifieds. Like the real infant, the virtual infant learns through interaction with its caregivers and adapts to a rich physical environment and a warm social environment infused with a wealth of linguistic content. The mapping of physical experience to linguistic meaning allows for the formation of concepts and practical reason. The first 200 words a baby learns are not necessarily the “top 200 words” output by frequency-analysis algorithms, although significant overlap is likely. More importantly, the way in which an infant learns language (through oral repetition and the labor of learning to vocalize phonemes in the context of joint attention) will allow its virtual agent to follow the same path. Many impatient types in Silicon Valley will despise this experimental design because the experiment will take at least 18 years to complete. However, it offers a direct line of attack on the AI alignment problem: the agent acquires human values the same way a human child does.

It is this AI, properly aligned with human values, that will eventually serve individuals and communities as their VirtuAlly, their Friend of Virtue.


We misunderstand Danny Hillis’ dream of Aristotle (as an artificially intelligent personal tutor) if we assume it to be equivalent to what some today call “AI personal assistants”, e.g. Siri or Alexa. If we care about augmenting our own virtue, using everything from today’s computerized technologies to ancient techniques, we must set our sights higher.

In discussing existing prototypes for the Legacy system above, I outlined my “LifeLine” project. Actually, before that, for years, I kept a journal. And even before that, I engaged in a pursuit of virtue as a social animal. That’s the true underlying technology here. That’s what’s foundational. If language is a technology, how much more so is how you speak (your idiolect, as well as exactly what you choose to say and when). If philosophy is a technology, how much more so is your personal philosophy. And personal virtue is a technology. Once we understand personal virtue as a technology, we can hack it, tweak it, make it better. As Susan Sontag said, “I’m only interested in people engaged in a project of self-transformation.” If these kinds of people come together, the novelty of the technology we use for communal and personal transformation is immaterial. Our resources are both of the moment and of the millennia.


Aristotle, Nicomachean Ethics (350 BCE)
Andy Clark, Natural-Born Cyborgs (2004)
Joel Doerfel, Slices and Traces (2012)
Daniel Hillis, Aristotle (The Knowledge Web) (2004)
Jaron Lanier, Who Owns the Future? (2014)
Cathy O’Neil, Congress is Missing the Point on Facebook: Americans Need a Data Bill of Rights (2018)
Tim Palmieri, What Sensors are in a Smartphone? (2018)
Matt Ridley, The Rational Optimist (2011)
Ferdinand de Saussure, Course in General Linguistics (1916)
Doc Searls, The Intention Economy (2012)
Gary Wolf, What is the Quantified Self? (2012)


[i] Following thinkers like Jaron Lanier (2014), “data” is defined here primarily as any information or digital trace generated in digital space by the actions of a human person, and secondarily as private information deriving from an outside source that is the rightful sole property of that person.

[ii] In Andy Clark’s sense, these become human functionalities, extensions of our human functioning (e.g. when is the last time you navigated without GPS?).

[iii] A high-ranking, explicit motivation in capturing data about myself is to track my physical and mental health. As such, all data captured should be subject to HIPAA protection.

[iv] In other words, the virtual environment is basically the Matrix. A side benefit of the experiment is that afterwards, you also have the Matrix (and can use it for things like discoveries in physics; as Feynman said, “there’s plenty of room at the bottom”).


Pragmatics in Praxis


This morning, I read a New Yorker article on A.I. entitled “Why Can’t My Computer Understand Me?”  It’s worth a read.  The article’s protagonist, Hector Levesque, denounces the Turing Test as too easy to scam.

I agree…with the proviso that, in the development of useful expert systems, we’ve reached a historic plateau in which, for business purposes, a useful metric is “Time to Pass the Turing Test” (TTPT).

My thinking on general AI still orbits a praxis-to-pragmatics approach, as opposed to the development of highly specific algorithms that remain in the realm of mere semiotics or semantics (e.g. explicit/latent semantic analysis, cluster analysis, inverse document-frequency analysis, hidden Markov models, etc.; e.g. Google Search, Google Knowledge Graph, Evi, Siri, Wolfram Alpha(?), etc.).

However, lately I’ve been pondering a radical pragmatic expansion of Lawrence Barsalou’s “ad hoc categories.”  A popular stock example of an ad hoc category would be “Things you’d grab from your house in a fire.”  (Of course, life is always even more ad hoc:  “Things you’d grab from your house if there was a fire in the kitchen and you knew you had at least two minutes, but probably not five.”)

The radical pragmatic expansion is prompted by meditation on the social.

In every social system we engage, we generate an entire Gestalt, ad hoc, fabric of meaning (e.g. shared meanings, shared allusions, private codes, inside jokes, et al).  It’s as if there’s a pragmatic “terroir” to our everyday actions (e.g. My girlfriend appreciates the subtle inflections of what it means for me to do dishes these days, given my current projects.  On another level of granularity, every time I do dishes, I use an ad hoc cognitive map of which regularly-used bowls in our apartment fit inside other bowls).  In a social context, ad hoc categories are the rule, not the exception.  We live a social tapestry of ad hoc categories, an ad hoc cognitive tapestry.

To get what I mean by “pragmatics”, a concept as simple as J.L. Austin’s “performative utterance” suffices as an initial springboard: “By saying X, I hereby do Y.”  E.g. “By saying ‘I do,’ I hereby commit myself.” But Austin cared about “how to do things with words.”  Praxis approaches pragmatics from the action side rather than  the semantics side.  Thus, I envision a sort of socially-aware “performative activity” / “performative agency”:  when J does X in context Y, it means Z to M.  How to signify things with actions.
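The schema "when J does X in context Y, it means Z to M" can be written down almost literally. A toy sketch, where the agents, acts, and meanings are invented examples rather than a real corpus:

```python
# A literal rendering of the performative-agency schema:
# (agent, act, context, observer) -> meaning. Entries are invented examples.
PRAGMATIC_MATRIX = {
    ("J", "does the dishes", "deadline week", "partner"): "a costly gesture of care",
    ("J", "does the dishes", "ordinary evening", "partner"): "routine housekeeping",
}

def signify(agent: str, act: str, context: str, observer: str) -> str:
    """When `agent` does `act` in `context`, what does it mean to `observer`?"""
    return PRAGMATIC_MATRIX.get((agent, act, context, observer), "no shared meaning yet")

meaning = signify("J", "does the dishes", "deadline week", "partner")
```

The lookup table is, of course, exactly what a real system could not hand-write; the point is only that meaning is indexed by agent, act, context, and observer at once.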

For General AI, then, one requires:

– Machine Learning
– Basic self-awareness (can represent and manipulate its own code; not strictly necessary, but super cool…and perhaps easier to code)
– Social awareness & social self-awareness (awareness of oneself as a social agent among other social agents)
– Event ontology – Event matrix, Causality matrix, Pragmatic matrix (notion that every event derives meaning from social fabric)
– Rules for principled norm-keeping & norm-breaking
– Multi-modal & cross-modal representation paradigms (requires at least two sensors…e.g. audio, visual, text)
– Socially engaged experience
– Abstraction to rules from particular experiences, integrated with a
– Categorical ecology (continually updated “ontology”) derived from the social realm (others in this situation, do X, mean Y, etc.).

For the AI envisioned by the New Yorker article (let’s call it “Alligator-AI”) you need much less (for an initial prototype):

– Machine Learning
– A general pragmatic ontology (including all relevant facts about, say, an alligator…like its body plan)
– Precise grammatical parsing (proliferate potential grammatical models, then use a semantics parser / neural net to narrow down to a frame)
– The ability to invoke an answer-frame appropriate to the question-frame (Alligators can’t run 100m hurdles. Gazelles, on the other hand….)

…or we could just rest on our laurels with the accomplishment of AI in Twitterbots with the same satisfaction as if we’d just built the Great Pyramid.
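A deliberately crude sketch of the last two Alligator-AI ingredients, a pragmatic ontology plus answer-frame invocation (all facts below are invented and simplified):

```python
# Toy "Alligator-AI": a tiny ontology of body-plan facts, plus a rule that
# invokes an answer-frame for a can-X-do-Y question.
ONTOLOGY = {
    "alligator": {"legs": "short, sprawling", "built_for_hurdling": False},
    "gazelle": {"legs": "long, cursorial", "built_for_hurdling": True},
}

def can_run_hurdles(animal: str) -> str:
    """Map the question-frame onto an answer-frame grounded in body-plan facts."""
    facts = ONTOLOGY[animal]
    verdict = "yes" if facts["built_for_hurdling"] else "no"
    return f"{verdict}: a {animal} has {facts['legs']} legs"

answer = can_run_hurdles("alligator")
```

Everything hard, the grammatical parsing and the population of the ontology, is assumed away here; the sketch only shows why the answer must route through facts about the world rather than through word statistics.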

An AI Direction for Today’s Giants


Google claims to have built “a web of things” to help drive its new Knowledge Graph.  From words to concepts and back?  Just as third-party researchers are using Google’s search algorithm to hunt for cancer biomarkers, Google is claiming to have “found concepts.”  What kind of concepts?  Google’s Norvig explains, “We consider each individual Wikipedia article as representing a concept (an entity or an idea), identified by its URL.” So Google’s using a Wikipedia-derived Explicit Semantic Analysis to achieve Semantic Search.  Novel.
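The Explicit Semantic Analysis move is easy to sketch in miniature: represent a term by its weights across article-concepts. The "articles" below are invented stand-ins for Wikipedia, and real ESA uses tf-idf weighting over the full corpus:

```python
# Toy Explicit Semantic Analysis: a term's "concept vector" is its weight
# in each Wikipedia-style article. The articles are invented stand-ins.
ARTICLES = {
    "Leonardo da Vinci": "renaissance painter inventor florence painter",
    "Mona Lisa": "renaissance portrait painter louvre",
    "Alligator": "reptile swamp predator",
}

def concept_vector(term: str) -> dict:
    """Weight of `term` in each concept = its raw count in that article's text."""
    return {title: text.split().count(term) for title, text in ARTICLES.items()}

vec = concept_vector("painter")
```

A term's "meaning" is then just its profile over concepts, which is why each article can be read as one concept identified by its URL.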

Meanwhile, Bing is doing Social Search…using Facebook’s Social Graph.  Great for seeing what shoes or hotels or articles your friends like…and other “niche knowledges.”  Not so great outside your community’s niches, your communal “filter bubble.”  (Google’s Knowledge Graph tackles the problem from the other direction: start with the most generic knowledge niches.  If you’re not searching for Da Vinci, you might not get Knowledge Graph.)

Then there’s Apple getting sued over Siri for “overstating the abilities of its virtual personal assistant.”  Who’s not overstating these days?  Apple’s ad teams have tailored a message that achieves the precise amount of ambiguity to maximize sex appeal and plausible deniability.  The suits won’t stick.

Of course, everyone’s attempting to build brand loyalty so they can rake in dollars.


Deleuze & Guattari define philosophy as the creation of concepts.  I marvel at Google (+Wikipedia), Bing (+Facebook), and Siri.  They are creating concepts, at least of a certain kind.  When you search for Da Vinci on Knowledge Graph and it groups Renaissance painters together, this appears as abstraction, generalization.  When you ask Siri for Indian food and she finds restaurants in your area, this is a form of pragmatic localization.  When you search Bing for fashion, and it tells you what your friends are wearing, it’s creating concepts in the space of social awareness.

Intelligence is metaphor all the way down.  All the services described above metaphorize in some nascent fashion. Lakoff and Johnson summarize:  “the essence of metaphor is understanding and experiencing one kind of thing in terms of another.”[1]  General AI can be achieved by building out multi-dimensional metaphorizing algorithms.
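One nascent, deliberately toy way to "metaphorize" computationally: score candidate vehicles for a tenor by the overlap of shared features. The feature vectors below are invented, not learned:

```python
# Toy metaphorizing: pick the vehicle ("the sun") whose features best
# overlap the tenor's ("Juliet"). Vectors are invented values along
# (animate, radiance/warmth, celestial).
VECTORS = {
    "juliet": (1.0, 0.9, 0.1),
    "sun":    (0.0, 1.0, 1.0),
    "moon":   (0.0, 0.2, 1.0),
    "stone":  (0.0, 0.1, 0.0),
}

def overlap(a, b):
    # Dot product: how much two things share, feature by feature.
    return sum(x * y for x, y in zip(a, b))

def best_vehicle(tenor, candidates):
    """Understand one thing in terms of another: maximize shared features."""
    return max(candidates, key=lambda c: overlap(VECTORS[tenor], VECTORS[c]))

vehicle = best_vehicle("juliet", ["sun", "moon", "stone"])
```

This captures only the flattest layer of metaphor (shared attributes); the multi-dimensional versions gestured at above would need relational structure, not just feature overlap.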

Interestingly, Siri, Google, and Bing each assume a specific want (desire) in the user, and tailor their service accordingly.  Siri assumes you don’t want abstract knowledge about the history or characteristics of Indian food, but that you want to eat some, nearby, soon.  Google assumes you want general knowledge of Renaissance painters or other search topics.  Bing assumes you want to know what your friends and acquaintances think.

What if what you want is general AI?  To achieve AI, concepts need semi-permeable membranes between them.  From Turner & Fauconnier’s “Conceptual Blending” to Ridley’s When Ideas Have Sex, ideas need room to breed.  As a first step in the right direction, I envision a service that understands and generates metaphor.  At first, I want it to be capable of understanding why and when it might be apt to say “Juliet is the sun,” “Man is a wolf to man,” or “You made your bed, now lie in it.”  For this, we need a Pragmatic Ontology, a subtle notion of what makes daily human actions meaningful.  Step two involves metaphorically extending the algorithms necessary for the first form of metaphorizing…finally achieving, for instance, an understanding of how identification with the hero of a story is a form of metaphor, how the move from a string to a thing is metaphor, how the metaphorical process is ubiquitous.  That’s what I want to see built.

Afterward, I’ll be satisfied enough to navigate to a local Indian restaurant to contemplate Donatello’s brushwork like my friends do.


[1] Metaphors We Live By (1980), 5.

Slices & Traces

In graduate school, I once heard a medieval scholar remark that we now knew what Thomas Aquinas did on nearly every day of his life.  While such a feat is perhaps the wet dream of a medievalist, technology is reaching the point where the same may soon be true of me or you.

Historians compile numerous traces (any historical artifact that says “Thomas was here”) into slices (e.g. a biography).  In the digital age, what fascinates me is that numerous ready-made slices of our virtual lives may be compiled easily from databases that archive massive amounts of our personal digital traces.

I recently had the opportunity to experiment with Stanford’s Muse Project, which provides various analyses and visualizations based on personal email history and browser history (sentiment analysis, social-group change over time, etc.).  Pros:  the program provides an interesting slice of one’s virtual self.  Cons:  the slice of my personal history recorded in my email database feels partial and one-sided.[1]  For another example, consider Facebook’s recently released “timeline.”  The history embedded in your Facebook timeline is yet another slice of your personal history.  Each slice tells its own story, albeit an incomplete story.  A slice is just a slice.

What if, like an fMRI, we were able to capture and compile slice upon slice?[2]  Would the slices add up to a complete picture?[3]  What if one were to aggregate and integrate all the slices of one’s virtual life?  What if you had the tools to capture and integrate your own personal data from email history and browser history and add that to your data from social networks (Facebook, Twitter, LinkedIn), dating sites (eHarmony, Match, OkCupid), bookmarking sites (StumbleUpon, del.icio.us, Digg), music sites (Pandora, Grooveshark), movie sites (Netflix, Blockbuster), video sites (YouTube, Vimeo, Dailymotion), commerce sites (Amazon, eBay), banking sites (Mint, Quicken), location services (Foursquare, GPS), SMS history, and blog corpus?  What if, to that already rich textual and social data, one added perceptual data captured via webcams, haptics, and EEG/GSR?  What if one were to sift, analyze, and integrate the data using textual algorithms (corpus linguistics, LSA, ESA, sentiment analysis), social algorithms (network and influence analysis), and perceptual algorithms, replete with visual recognition (facial, gesture, object, movement), audio recognition (voice, music, sound), and touch recognition (texture, heat, pressure)?  (See Figure 1)
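Because every digital trace carries a time-stamp, step one of such integration is just a chronological merge of the pre-sorted slices. A toy sketch (the slices and events are invented):

```python
import heapq

# Each "slice" is already a chronologically sorted stream of
# (timestamp, source, event) tuples.
email  = [(1, "email", "wrote to A"), (5, "email", "reply from A")]
social = [(2, "facebook", "liked a post")]
gps    = [(3, "gps", "arrived at cafe"), (4, "gps", "left cafe")]

# Step one of integration: merge every slice into a single timeline.
# heapq.merge is lazy and assumes each input is already sorted.
timeline = list(heapq.merge(email, social, gps))
```

The hard problems (identity resolution across slices, and standardizing the formats in the first place) live downstream of this merge, but the shared temporal dimension is what makes the slices stackable at all.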

Slicing & Tracing
Such a system of integrated personal data, collected en masse (even if anonymized), would prove invaluable to social scientists, historians, marketers, Big Brothers, and researchers of all ilks.  Although we’d never achieve Rankean history “wie es eigentlich gewesen” (as it actually happened) through such a system, it represents a potential tool (among other tools we’re developing) that will soon get us closer to historical realism (or even hyper-realism).  What I’d like to discuss today is not the fine-grain detail we may someday achieve by integrating slices and traces.  Instead, today I want to talk about the slicing and tracing.

Suppose you mummify your information…all of your information.[4]  You’re still just a data-fossil in a museum exhibit a millennium from now (and if everyone gets mummified, probably a poorly-visited exhibit).  But your data doesn’t even make it to the museum without first undergoing some form of condensation and selection.[5]  I don’t care how much you love your grandpa; you’re not going to use your entire life to watch a second-by-second video of his entire life.

Before the digital age, condensation and selection happened naturally in places like family photo albums and dinner-table stories.  These human-sized brain-morsels could be chewed and digested comfortably.  In the digital age, a deluge of data makes you cross-eyed and bloated while historians babble about Kim Kardashian and advertisers hypnotize you with French fries.  As we speak, historiography is being asked to develop some frighteningly powerful tools to condense uncompressed information, select salient aspects, and present us with soundbites (think Robin Williams in The Final Cut).  Too much data is the first challenge facing next-gen story-telling gurus.

But too much information (TMI) is merely the prima facie challenge.  The real challenge, as I see it, is not TMI but too little intelligence.  I’ve often said that “after the Information Age comes the Intelligence Age.”  I want to see a generation of “intelligence scientists” rise up to replace today’s “information scientists.”  Would you rather preserve your intelligence (creativity, intuition) or your information?[6]  What would that even look like?



In the spirit of Aristotle and Nietzsche, I’ve nicknamed the data-integration algorithm-hub “VirtuAlly.”


[1] Also, the sentiment analysis engine in Muse is amateurish.

[2] The current discussion assumes that the capture, aggregation and integration of data would be for private and personal use only.  With increasing sousveillance, each of us may be able to compile an increasingly complete picture of our personal histories.  As technologies for personal data capture, aggregation, and integration progress, the following philosophical stance will also snowball in importance:  an individual’s data is his or her inalienable property.

[3] Temporality is a dimension common to each of the following data slices.  Each slice is like a layer of bedrock, and data archived in each aggregates many fossilized traces of one’s virtual life.  Time-stamps are common in each digital trace, making chronological sorting easy.  Who will standardize the aggregation and integration of these slices, as we once standardized the USB port?

[4] Lifenaut.com offers a digital (and biological!) time-capsule for would-be immortality-seekers.

[5] By condensation I mean something like summary, and by selection I roughly mean meme-discrimination.

[6] Arguably, neither is any good without the other, so my answer is “both.”



What story does your data tell?

New York Times data analyst on visualization


Creates a slice of your personal history using your EMAIL, with capabilities for BROWSER HISTORY (best in Firefox).  The program runs securely on your local machine, so there’s no chance your data will make it to the cloud.  I’ve experimented with this program with interesting results.

Capture your LOCATION DATA.

Capture BROWSER HISTORY.  A friend of mine built this.




Interactive time capsule, digital self-storage space (digital locker)

Also, they store your DNA…free (suggested donation $399)

from high mountains

“on the mountains of truth you can never climb in vain: either you will reach a point higher up today, or you will be training your powers so that you will be able to climb higher tomorrow.” – nietzsche

i wake early this morning to a thin mist–the kind that promises to burn off in a few moments.  i know immediately what i have to do.  foregoing my normal coffee-routine, i don workout gear and hiking shoes, grab my camera, and start up the mountain.   like everything around me, my lungs and skin drink in the steam, silently thanking me for this trot through the vapor.  along the way, the world dances into my irises with a surreal, ever-shifting blend of golden light and mist.   trees drink and drip.  birds chirp, aflutter in the underbrush.  all of los angeles has its head in the low-lying clouds.

up, up i hike, up to the roof of the fog.  suddenly, i’m above it all.  everything around me is clear blue sky and piercing yellow light.  below, all is fog.  truth is like weather–everywhere undeniable, everywhere local.  i, a lone hiker, somehow move between these two worlds this morning.  i ascend.

i summit.  here i am, alone, atop the mountain.   from here, i put my question to the clouds.  a thin line of brown lies above the white blanket of moisture, slouching westward.  brown tinges this rolling sea of mist, complicating the aesthetic.  a piece of burbank is sunlit, just as the drifting cloud-cover circumvents the mountain before reuniting past it.  in one pocket of sun, a long procession of vehicles already moves down the superhighway.  downtown and hollywood are shrouded.  greater los angeles has only a few pockets of clarity.

i meditate on the history of humanity and our relation to heights, mountains, ascent.  our ancestors knew the value of vantage-points.  treetops are one thing, mountaintops another.  up here, one can see for miles around.   ancient metaphors link seeing to thinking, visual acuity with mental acuity.  seeing is believing.  see what i’m saying?  this mountaintop-experience inspires me to continue to weigh visionaries on the value-creativity of their vision.  this morning, for a brief moment, i am above it all.  time to descend once more.

Your Piece of the Pie in the Sky: A Half-Baked Thought Experiment (or) World Peace and the Seven Sextillion Dollar Rock

Who wants to be a billionaire? What if I told you that tomorrow, you could be? All you’d have to do is talk to your neighbor—who is also poised to become a billionaire—and agree to claim your billion dollars together, at the same time. Sound too good to be true? It isn’t.

Scientists have long hypothesized about the value of precious metals and other resources in space rocks. Suppose you were to learn of the discovery of a space rock valued at seven sextillion dollars. Suppose this rock belonged equally to all the people of the world. With world population approaching seven billion,[i] your share would come to a trillion dollars; the billionaire pitch above is, if anything, a dramatic understatement.

In fact, this rock exists. It’s the moon. The moon holds an abundance of helium-3, iron, and titanium, among other resources. The helium-3 alone is worth 286 quadrillion dollars.[ii] (Your share of the helium-3 comes out to about $42,000,000.)

We are poised to become a space-faring species. Space law, as it stands today, prevents countries from owning and exploiting space resources for national gain.[iii] A strong argument can be made from existing space law that the moon and other space objects are the common heritage of humanity.[iv] As such, they belong to you and me. We’re billionaires already. So is everyone on the planet. And this wealth is equally distributed. And this wealth is poised to grow indefinitely. As our species continues to explore space, we will continue to share equally in its ever-growing wealth. If we act now.

You’re not a billionaire today. But you can be tomorrow. You must claim it. Actually, we must claim it. All of us must claim our heritage: all at once, together, at the same time. What happens when billions of people worldwide use cellphones and laptops to demand the limitless wealth that is our birthright? What happens when we create a true digital democracy and stand up to claim our heritage? What happens when each of the seven billion people on the planet becomes a billionaire?

Even now, nations are racing to the moon to discover its resources. Although space law prevents nations from owning and exploiting space resources for gain, it does not prevent private individuals and companies from doing so. Private individuals and private companies are already joining the race to hoard as much as they can for themselves. Already in the United States, three individuals contend that a legal loophole has allowed them to lay claim to 95% of the moon’s mineral rights.[v] Is it theirs? Or is it all of ours? We can allow the greed and inequity of our current economic and legal systems to spread into the solar system as we become space-farers. Alternatively, we can stand up together, all at once, to stake an equal claim to prosperity for ourselves and every person across the globe. Space is the common heritage of humanity. If we all claim this heritage together, we will each wake up tomorrow to unimaginable abundance for ourselves and our posterity.

Space-Farers, Unite!





Table 1: Data

“Reserves of helium-3 on the moon are in the order of a million tons, according to some estimates, and just 25 tons could serve to power the European Union and United States for a year.” (At current energy-consumption rates, that would power the world for 40,000 years, for those of you who hate math.) (http://www.moondaily.com/reports/Moon_potential_goldmine_of_natural_resources_999.html)

Energy use per capita: primary energy use (before transformation to other end-use fuels), in kilograms of oil equivalent, per capita. 2006 world figure: 1818 kg.

Crude oil: $82 per barrel (March 2010)

Table 2: Calculation of the Value of Lunar He-3

  1818           kg oil equivalent per person per year [2006, world]
× 1.136363636    L per kg of crude
= 2065.909091    L per person per year
× 0.006289308    barrels per L
= 12.99313894    barrels per person per year
× 82             $ per barrel [March 2010]
= 1065.437393    $ per person per year
× 40,000         years the lunar He-3 supply would last
= $42,617,495.71 per person
× 6,700,000,000  persons
= $285,537,221,269,297,000 of He-3


[i] Google global population, as of publication.

[ii] See Tables 1 and 2.

[iii] “Outer space, including the moon and other celestial bodies, is not subject to national appropriation by claim of sovereignty, by means of use or occupation, or by any other means.”  United Nations, Outer Space Treaty, Article 2. (Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies, Jan. 27, 1967, 18 U.S.T. 2410, 610 U.N.T.S. 205).

[iv] “The exploration and use of outer space, including the moon and other celestial bodies, shall be carried out for the benefit and in the interests of all countries, irrespective of their degree of economic or scientific development, and shall be the province of all mankind…” United Nations, Outer Space Treaty, Article 1. (Treaty on Principles Governing the Activities of States in the Exploration and Use of Outer Space, Including the Moon and Other Celestial Bodies, Jan. 27, 1967, 18 U.S.T. 2410, 610 U.N.T.S. 205).

[v] Here’s the archived version.