The Plausibility of Near-Term Machine Sentience.

When should we expect “operational sentience”: the point at which the most effective way to interact with a machine is to assume that it is sentient and that it understands what we tell it? I want to argue that near-term machine sentience in this sense is plausible, where “near-term” means, say, ten years.

Deep learning has already dramatically improved natural language processing. Translation, question answering, parsing, and linking have all improved. A fundamental question is whether the recent advances in translation and linking provide evidence that we are getting close to “understanding”. I do not want to argue that we are getting close, but rather just that we don’t know and that near-term sentience is “plausible”.

My case is based on the idea that paraphrase, entailment, linking, and structuring may be all that is needed. Paraphrase is closely related to translation: saying the same thing a different way or in a different language. There has been great progress here. Entailment is determining whether one statement implies another. Classical logic was developed as a model of entailment, but natural language entailment is far too slippery to be modeled by any direct application of formal logic. I interpret the progress in deep learning applied to NLP question answering as evidence of progress in entailment. Entailment is also closely related to paraphrase (no paraphrase is precisely meaning-preserving), and the progress in translation seems related to potential progress in entailment. “Linking” means tagging natural language phrases with the database entries they refer to. The database can be taken to be Freebase or Wikidata or any knowledge graph. This is related to the philosophical problem of “reference”: what is our language referring to? But simply tagging phrases with database entries, such as “Facebook” or “the US constitution”, does not seem philosophically mysterious. I am using the term “structuring” to mean knowledge base population (KBP): populating a knowledge base from unstructured text. Extracting entities and relationships (building a database) from unstructured text likewise does not seem philosophically mysterious. It seems plausible to me that paraphrase, entailment, linking, and KBP will all see very significant near-term advances based on deep learning. The notions of “database” and “inference rule” (as in datalog) presumably have to be blended with the distributed representations of deep learning and integrated into “linking” and “structuring”. This seems related to memory networks with large memories, but I will save that for another post.
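
To make the four notions concrete, here is a minimal sketch in Python. Everything in it is a hypothetical stand-in: the entity IDs, the single “founded” pattern, and the substring test each mark the place where a learned deep model would go. The point is only the shape of the four operations, not how to implement them.

```python
from typing import List, Optional, Tuple

# Hypothetical mini knowledge base; the IDs merely imitate Wikidata style.
KB_ENTITIES = {
    "facebook": "Q355",
    "the us constitution": "Q11698",
}

def link(phrase: str) -> Optional[str]:
    """Linking: tag a phrase with the database entry it refers to."""
    return KB_ENTITIES.get(phrase.lower())

def extract_triples(sentence: str) -> List[Tuple[str, str, str]]:
    """Structuring (KBP): extract (entity, relation, entity) records.
    One hard-coded pattern stands in for a learned extractor."""
    words = sentence.rstrip(".").split()
    if "founded" in words:
        i = words.index("founded")
        return [(" ".join(words[:i]), "founded", " ".join(words[i + 1:]))]
    return []

def entails(premise: str, hypothesis: str) -> bool:
    """Entailment: does the premise imply the hypothesis?
    Substring containment stands in for a learned classifier."""
    return hypothesis.lower().rstrip(".") in premise.lower()

def paraphrase(sentence: str) -> str:
    """Paraphrase: say the same thing another way.
    A one-rule rewrite stands in for a translation-style model."""
    return sentence.replace("founded", "established")

if __name__ == "__main__":
    print(link("Facebook"))                                    # Q355
    print(extract_triples("Mark Zuckerberg founded Facebook."))
    print(entails("Zuckerberg founded Facebook in 2004.",
                  "Zuckerberg founded Facebook."))             # True
    print(paraphrase("Zuckerberg founded Facebook in 2004."))
```

None of these stand-ins scales, of course; the bet is that deep learning can fill in each function body while the surrounding interfaces stay roughly as written.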

The plausibility of near-term machine sentience is supported by the plausibility that language understanding is essentially independent of perception and action, other than the input and output of language itself. There is a lot of language data out there. I have written previous blog posts against the idea of “grounding” the semantics of natural language in non-linguistic perception and action or in the physical position of material in space.

Average-level human natural language understanding may prove to be easier than, say, average-level human vision or physical coordination. There has been evolutionary pressure on vision and coordination for much longer than there has been evolutionary pressure on language understanding. For me the main question is how close we are to effective paraphrase, entailment, linking, and structuring. NLP researchers are perhaps the best AI practitioners to comment on this blog post. I believe that Gene Charniak, a pioneer of machine parsing, believes that machine NLP understanding is at least a hundred years off. But I am more interested in laying out concrete arguments, one way or the other, than in taking opinion polls. Deep learning may have the ability to automatically handle the hundreds of thousands of linguistic phenomena that seem to exist in, say, English. People learn language. Is there some reasoned argument that this cannot work within a decade?

Maybe at some point I will write a longer post outlining what current work seems to me to be on the path to machine sentience.


One Response to The Plausibility of Near-Term Machine Sentience.

Heinz Hemken says:

    The crux of this essay is in the third-to-last paragraph: the conjecture that full language understanding, up to and including sentience, can be achieved in complete isolation from the embodiment and behavior that historically gave birth to it. This is effectively the antithesis of embodied cognition, which usually implies that a whole-organism sensory-motor loop behaving dynamically in the physical world is required for anything resembling sentience. It may even be required for deep understanding of human language, although that would be a murkier claim. Where along this spectrum will systems that can reasonably be claimed to be AGI appear? That, to me, is the most fascinating question.
