This morning, I read a New Yorker article on A.I. entitled “Why Can’t My Computer Understand Me?” It’s worth a read. The article’s protagonist, Hector Levesque, denounces the Turing Test as too easy to scam.
I agree…with the proviso that, in the development of useful expert systems, we’ve reached a historic plateau where, for business purposes, a useful metric is “Time to Turing-Complete” (TTTC): the time it takes a system to pass (or scam) a Turing-style test within its domain.
My thinking on general AI still orbits a praxis-to-pragmatics approach, as opposed to the development of highly specific algorithms that remain in the realm of mere semiotics or semantics (techniques like Explicit/Latent Semantic Analysis, cluster analysis, inverse word-frequency analysis, and HMMs; products like Google search, Google Knowledge Graph, Evi, Siri, Wolfram Alpha(?), etc.).
However, lately I’ve been pondering a radical pragmatic expansion of Lawrence Barsalou’s “ad hoc categories.” A popular stock example of an ad hoc category would be “Things you’d grab from your house in a fire.” (Of course, life is always even more ad hoc: “Things you’d grab from your house if there was a fire in the kitchen and you knew you had at least two minutes, but probably not five.”)
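To make the idea concrete: an ad hoc category isn’t a stored taxonomy entry; it’s a predicate conjured on the fly from a goal and a constraint. Here’s a minimal toy sketch (all names, items, and scoring are hypothetical illustrations, not a real cognitive model):

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    value: float         # how much you'd hate to lose it, 0..1 (made up)
    grab_seconds: float  # time needed to fetch and carry it (made up)

def ad_hoc_category(items, time_budget, min_value=0.5):
    """'Things to grab in a fire, given ~time_budget seconds':
    a category that exists only relative to this goal and clock.
    Greedily keeps the most valued items while time allows."""
    chosen, clock = [], 0.0
    for it in sorted(items, key=lambda i: -i.value):
        if it.value >= min_value and clock + it.grab_seconds <= time_budget:
            chosen.append(it.name)
            clock += it.grab_seconds
    return chosen

house = [Item("passport", 0.9, 10), Item("laptop", 0.8, 15),
         Item("photo albums", 0.7, 40), Item("couch", 0.2, 300)]
print(ad_hoc_category(house, time_budget=120))  # the two-minute fire
```

Change the `time_budget` and the category’s membership changes with it, which is the whole point: the category is derived from the situation, not retrieved from storage.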
The radical pragmatic expansion is prompted by meditation on the social.
In every social system we engage, we generate, ad hoc, an entire Gestalt, a fabric of meaning (shared meanings, shared allusions, private codes, inside jokes, and so on). It’s as if there’s a pragmatic “terroir” to our everyday actions (e.g. my girlfriend appreciates the subtle inflections of what it means for me to do dishes these days, given my current projects; on another level of granularity, every time I do dishes, I use an ad hoc cognitive map of which regularly-used bowls in our apartment fit inside other bowls). In a social context, ad hoc categories are the rule, not the exception. We live a social tapestry of ad hoc categories, an ad hoc cognitive tapestry.
To get what I mean by “pragmatics”, a concept as simple as J.L. Austin’s “performative utterance” suffices as an initial springboard: “By saying X, I hereby do Y.” E.g. “By saying ‘I do,’ I hereby commit myself.” But Austin cared about “how to do things with words.” Praxis approaches pragmatics from the action side rather than the semantics side. Thus, I envision a sort of socially-aware “performative activity” / “performative agency”: when J does X in context Y, it means Z to M. How to signify things with actions.
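The schema “when J does X in context Y, it means Z to M” can be written down directly as a lookup keyed by agent, action, and context, with meanings indexed per observer. A minimal sketch, with wholly hypothetical agents, contexts, and meanings:

```python
# "When J does X in context Y, it means Z to M": a pragmatic matrix
# mapping (agent, action, context) to observer-relative meanings.
# All entries here are invented illustrations.
pragmatic_matrix = {
    ("J", "does_dishes", "deadline_week"): {
        "girlfriend": "a costly, hence weighty, gesture of care",
        "roommate": "nothing unusual",
    },
}

def signified(agent, action, context, observer):
    """What does this action signify to this observer?
    Falls back to the bare action description when no shared
    fabric of meaning exists between agent and observer."""
    return pragmatic_matrix.get((agent, action, context), {}).get(
        observer, f"{agent} {action}")

print(signified("J", "does_dishes", "deadline_week", "girlfriend"))
```

The key design point is that meaning lives in the triple-plus-observer, not in the action alone: the same dishwashing event signifies differently to different observers, and signifies nothing special outside the shared context.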
For General AI, then, one requires:
– Machine Learning
– Basic self-awareness (can represent and manipulate its own code) (not strictly necessary, but super cool…and perhaps easier to code)
– Social awareness & social self-awareness (awareness of oneself as a social agent among other social agents)
– Event ontology – Event matrix, Causality matrix, Pragmatic matrix (notion that every event derives meaning from social fabric)
– Rules for principled norm-keeping & norm-breaking
– Multi-modal & cross-modal representation paradigms (requires at least two sensors…e.g. audio, visual, text)
– Socially engaged experience
– Abstraction to rules from particular experiences, integrated with a
– Categorical ecology (continually updated “ontology”) derived from the social realm (others in this situation, do X, mean Y, etc.).
For the AI envisioned by the New Yorker article (let’s call it “Alligator-AI”), you need much less (for an initial prototype):
– Machine Learning
– A general pragmatic ontology (including all relevant facts about, say, an alligator…like its body plan)
– Precise grammatical parsing (proliferate potential grammatical models, then use a semantics parser / neural net to narrow down to a frame)
– The ability to invoke an answer-frame appropriate to the question-frame (Alligators can’t run 100M hurdles. Gazelles, on the other hand….)
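The last two steps, question-frame to answer-frame, can be sketched as a lookup against the pragmatic ontology’s body-plan facts. This toy (the fact schema and the hurdling criterion are my invented stand-ins, not a real parser or knowledge base) answers the Levesque-style question “Can an X run the 100M hurdles?”:

```python
# Hypothetical mini-ontology of body-plan facts for the Alligator-AI sketch.
ontology = {
    "alligator": {"legs": 4, "leg_length": "short", "gait": "sprawling"},
    "gazelle":   {"legs": 4, "leg_length": "long",  "gait": "cursorial"},
}

def can_run_hurdles(animal):
    """Answer-frame for the question-frame 'Can an X run the 100M
    hurdles?': hurdling requires long legs and a running (cursorial)
    gait. Returns 'unknown' for animals outside the ontology."""
    facts = ontology.get(animal)
    if facts is None:
        return "unknown"
    return facts["leg_length"] == "long" and facts["gait"] == "cursorial"

print(can_run_hurdles("alligator"))  # alligators can't
print(can_run_hurdles("gazelle"))    # gazelles, on the other hand...
```

Even this crude version makes the architectural point: the hard part isn’t the lookup, it’s knowing which answer-frame (here, the leg-length-and-gait criterion) the question-frame should invoke.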
…or we could just rest on our laurels over the accomplishment of AI in Twitterbots, with the same satisfaction as if we’d just built the Great Pyramid.