Pragmatics in Praxis


This morning, I read a New Yorker article on A.I. entitled “Why Can’t My Computer Understand Me?”  It’s worth a read.  The article’s protagonist, Hector Levesque, denounces the Turing Test as too easy to scam.

I agree…with the proviso that, in the development of useful expert systems, we've reached a historic plateau in which, for business purposes, a useful metric is:  "Time to Turing-Complete" (TTTC), i.e. how long until a system can pass a Turing-style test within its narrow domain (not Turing completeness in the computability sense).

My thinking on general AI still orbits a praxis-to-pragmatics approach, as opposed to the development of highly specific algorithms that remain in the realm of mere semiotics or semantics:  techniques like Explicit / Latent Semantic Analysis, cluster analysis, inverse word-frequency analysis (e.g. TF-IDF), and hidden Markov models; products like Google Search, Google Knowledge Graph, Evi, Siri, and Wolfram Alpha(?).

However, lately I've been pondering a radical pragmatic expansion of Lawrence Barsalou's "ad hoc categories."  A popular stock example of an ad hoc category would be "Things you'd grab from your house in a fire."  (Of course, life is always even more ad hoc:  "Things you'd grab from your house if there was a fire in the kitchen and you knew you had at least two minutes, but probably not five.")
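
To make this concrete, here's a minimal sketch (all items, weights, and thresholds invented for illustration) of an ad hoc category as a predicate derived on the fly from a goal plus a situational constraint.  Change the time budget and the category's membership changes with it:

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    grab_seconds: float   # time needed to fetch it
    replaceable: bool     # can money replace it?
    sentimental: float    # subjective value, 0..1

def fire_grab_category(items, budget_seconds):
    """Ad hoc category 'things to grab in a fire', built on the fly from
    a goal (save the irreplaceable) and a constraint (time budget)."""
    candidates = [i for i in items
                  if (not i.replaceable or i.sentimental > 0.8)
                  and i.grab_seconds <= budget_seconds]
    # Rank by value density: sentimental value per second spent grabbing.
    return sorted(candidates,
                  key=lambda i: i.sentimental / i.grab_seconds,
                  reverse=True)

household = [
    Item("laptop", 10, True, 0.4),
    Item("passport", 15, False, 0.2),
    Item("photo album", 20, True, 0.95),
    Item("couch", 300, True, 0.1),
]

# "...a fire in the kitchen and you knew you had at least two minutes"
print(fire_grab_category(household, budget_seconds=120))
```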

The radical pragmatic expansion is prompted by meditation on the social.

In every social system we engage, we generate an entire ad hoc Gestalt, a fabric of meaning:  shared meanings, shared allusions, private codes, inside jokes, and the rest.  It's as if there's a pragmatic "terroir" to our everyday actions (e.g. my girlfriend appreciates the subtle inflections of what it means for me to do dishes these days, given my current projects; on another level of granularity, every time I do dishes, I use an ad hoc cognitive map of which regularly-used bowls in our apartment fit inside other bowls).  In a social context, ad hoc categories are the rule, not the exception.  We live a social tapestry of ad hoc categories, an ad hoc cognitive tapestry.

To get what I mean by "pragmatics," a concept as simple as J.L. Austin's "performative utterance" suffices as an initial springboard:  "By saying X, I hereby do Y" (e.g. "By saying 'I do,' I hereby commit myself").  But Austin cared about "how to do things with words."  Praxis approaches pragmatics from the action side rather than the semantics side.  Thus, I envision a sort of socially aware "performative activity" / "performative agency":  when J does X in context Y, it means Z to M.  How to signify things with actions.
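
Here's a toy rendering of that four-place relation in Python; the entries are invented for illustration, not drawn from any real knowledge base:

```python
# A toy "pragmatic matrix": (agent, action, context, observer) -> meaning.
pragmatic_matrix = {
    ("me", "do_dishes", "crunch_week", "girlfriend"):
        "I still care about our shared space despite the deadline",
    ("me", "do_dishes", "ordinary_day", "girlfriend"):
        "routine chore, no special signal",
    ("guest", "do_dishes", "dinner_party", "host"):
        "gratitude, possibly mild over-politeness",
}

def signify(agent, action, context, observer):
    """How to signify things with actions: the same action carries
    different meanings depending on context and who is watching."""
    return pragmatic_matrix.get((agent, action, context, observer),
                                "no conventional meaning established")

print(signify("me", "do_dishes", "crunch_week", "girlfriend"))
```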

For General AI, then, one requires:

– Machine Learning
– Basic self-awareness (can represent and manipulate its own code): not strictly necessary, but super cool…and perhaps easier to code
– Social awareness & social self-awareness (awareness of oneself as a social agent among other social agents)
– Event ontology: Event matrix, Causality matrix, Pragmatic matrix (the notion that every event derives its meaning from the social fabric)
– Rules for principled norm-keeping & norm-breaking
– Multi-modal & cross-modal representation paradigms (requires at least two sensors…e.g. audio, visual, text)
– Socially engaged experience
– Abstraction from particular experiences to rules, integrated with a categorical ecology: a continually updated "ontology" derived from the social realm (others in this situation do X, mean Y, etc.; a toy sketch follows this list).
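
As promised, a toy sketch of that last ingredient: tally what others do (and seem to mean) in each situation, and read norms off the counts.  Everything here is invented; no claim that it matches any published system:

```python
from collections import Counter, defaultdict

class CategoricalEcology:
    """Toy continually-updated 'ontology': others in this situation
    do X, mean Y."""
    def __init__(self):
        self.observations = defaultdict(Counter)

    def observe(self, situation, action, inferred_meaning):
        self.observations[situation][(action, inferred_meaning)] += 1

    def norm(self, situation):
        """Most common (action, meaning) pair seen in this situation."""
        seen = self.observations[situation]
        return seen.most_common(1)[0][0] if seen else None

eco = CategoricalEcology()
eco.observe("funeral", "wear_black", "respect")
eco.observe("funeral", "wear_black", "respect")
eco.observe("funeral", "wear_red", "protest")   # principled norm-breaking
print(eco.norm("funeral"))  # ('wear_black', 'respect')
```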

For the AI envisioned by the New Yorker article (let’s call it “Alligator-AI”) you need much less (for an initial prototype):

– Machine Learning
– A general pragmatic ontology (including all relevant facts about, say, an alligator…like its body plan)
– Precise grammatical parsing (proliferate potential grammatical models, then use a semantics parser / neural net to narrow down to a frame)
– The ability to invoke an answer-frame appropriate to the question-frame (alligators can't run the 100m hurdles; gazelles, on the other hand….)
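
A minimal sketch of that last step, with an invented micro-ontology of body-plan facts and activity requirements (entities, attributes, and thresholds are all made up):

```python
# Toy pragmatic ontology: a few body-plan facts per entity.
ontology = {
    "alligator": {"legs": 4, "leg_length_m": 0.2},
    "gazelle":   {"legs": 4, "leg_length_m": 0.8},
}

# Each activity imposes requirements on a body plan; the 100m hurdles
# need legs long enough to clear roughly 0.84 m barriers.
activities = {
    "run_100m_hurdles": lambda e: e["leg_length_m"] >= 0.7,
}

def can(entity, activity):
    """Answer-frame matched to the question-frame 'Can X do Y?':
    look up the entity's body plan, test the activity's requirements."""
    return activities[activity](ontology[entity])

print(can("alligator", "run_100m_hurdles"))  # False
print(can("gazelle", "run_100m_hurdles"))    # True
```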

…or we could just rest on our laurels, celebrating the accomplishment of AI in Twitterbots with the same satisfaction as if we'd just built the Great Pyramid.

An AI Direction for Today’s Giants

Today

Google claims to have built "a web of things" to help drive its new Knowledge Graph.  From words to concepts and back?  Just as third-party researchers are using Google's search algorithm to find cancer biomarkers, Google is claiming to have "found concepts."  What kind of concepts?  Google's Norvig explains, "We consider each individual Wikipedia article as representing a concept (an entity or an idea), identified by its URL."  So Google's using a Wikipedia-derived Explicit Semantic Analysis to achieve Semantic Search.  Novel.
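
For the curious, Explicit Semantic Analysis in miniature (after Gabrilovich & Markovitch, 2007): treat each Wikipedia article as a concept and represent free text as its similarity to every article.  The three "articles" below are toy stand-ins for the real encyclopedia:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for Wikipedia articles; each one "is" a concept.
concepts = {
    "Leonardo da Vinci": "italian renaissance painter inventor mona lisa",
    "Alligator": "large reptile crocodilian swamp predator",
    "Indian cuisine": "curry spices naan tandoori restaurant food",
}

vectorizer = TfidfVectorizer()
concept_matrix = vectorizer.fit_transform(concepts.values())

def esa_vector(text):
    """Project free text into concept space: one weight per article."""
    sims = cosine_similarity(vectorizer.transform([text]), concept_matrix)[0]
    return dict(zip(concepts.keys(), sims.round(3)))

print(esa_vector("renaissance painter of the mona lisa"))
# The highest weight lands on the "Leonardo da Vinci" concept.
```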

Meanwhile, Bing is doing Social Search…using Facebook's Social Graph.  Great for seeing what shoes or hotels or articles your friends like…and other "niche knowledges."  Not so great outside your community's niches, your communal "filter bubble."  (Google's Knowledge Graph tackles the problem from the other direction:  start with the most generic knowledge niches.  If you're not searching for Da Vinci, you might not get Knowledge Graph.)

Then there’s Apple getting sued over Siri for “overstating the abilities of its virtual personal assistant.”  Who’s not overstating these days?  Apple’s ad teams have tailored a message that achieves the precise amount of ambiguity to maximize sex appeal and plausible deniability.  The suits won’t stick.

Of course, everyone’s attempting to build brand loyalty so they can rake in dollars.

Tomorrow

Deleuze & Guattari define philosophy as the creation of concepts.  I marvel at Google (+Wikipedia), Bing (+Facebook), and Siri.  They are creating concepts–at least of a certain kind.  When you search for Da Vinci on Knowledge Graph and it groups Renaissance painters together, this appears as abstraction, generalization.  When you ask Siri for Indian food and she finds restaurants in your area, this is a form of pragmatic localization.  When you search Bing for fashion and it tells you what your friends are wearing, it’s creating concepts in the space of social awareness.

Intelligence is metaphor all the way down.  All the services described above metaphorize in some nascent fashion.  Lakoff and Johnson summarize:  “the essence of metaphor is understanding and experiencing one kind of thing in terms of another.”[1]  General AI can be achieved by building out multi-dimensional metaphorizing algorithms.

Interestingly, Siri, Google, and Bing each assume a specific want (desire) in the user, and tailor their service accordingly.  Siri assumes you don’t want abstract knowledge about the history or characteristics of Indian food, but that you want to eat some, nearby, soon.  Google assumes you want general knowledge of Renaissance painters or other search topics.  Bing assumes you want to know what your friends and acquaintances think.

What if what you want is general AI?  To achieve AI, concepts need semi-permeable membranes between them.  From Turner & Fauconnier’s “Conceptual Blending” to Ridley’s “When Ideas Have Sex,” ideas need room to breed.  As a first step in the right direction, I envision a service that understands and generates metaphor.  At first, I want it to be capable of understanding why and when it might be apt to say “Juliet is the sun,” “Man is a wolf to man,” or “You made your bed, now lie in it.”  For this, we need a Pragmatic Ontology, a subtle notion of what makes daily human actions meaningful.  Step two involves metaphorically extending the algorithms necessary for the first form of metaphorizing…finally achieving, for instance, an understanding of how identification with the hero of a story is a form of metaphor, how the move from a string to a thing is metaphor, how the metaphorical process is ubiquitous.  That’s what I want to see built.
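
To gesture at step one: a toy aptness check, loosely inspired by salience-imbalance accounts of metaphor (attributes and weights entirely invented).  “Juliet is the sun” works because attributes highly salient in the sun are present, but muted, in Juliet:

```python
# Invented feature weights: how salient each attribute is for each concept.
features = {
    "sun":    {"radiant": 0.9, "central": 0.8, "life_giving": 0.7},
    "juliet": {"radiant": 0.3, "central": 0.6, "life_giving": 0.4},
    "wolf":   {"predatory": 0.9, "pack_animal": 0.7, "ruthless": 0.8},
    "man":    {"human": 1.0, "predatory": 0.2, "ruthless": 0.3},
}

def metaphor_aptness(target, source):
    """'Target is source' is apt when attributes highly salient in the
    source are present, but less salient, in the target."""
    t, s = features[target], features[source]
    shared = [a for a in s if a in t and s[a] > t[a]]
    return sum(s[a] - t[a] for a in shared), shared

print(metaphor_aptness("juliet", "sun"))  # "Juliet is the sun"
print(metaphor_aptness("man", "wolf"))    # "Man is a wolf to man"
```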

Afterward, I’ll be satisfied enough to navigate to a local Indian restaurant to contemplate Donatello’s brushwork like my friends do.

______

[1] George Lakoff & Mark Johnson, Metaphors We Live By (1980), p. 5.