Architectures and Language Instincts

This post is inspired by a recent book and article by Vyvyan Evans declaring the death of the Pinker-Chomsky notion of a language instinct.  I have written previous posts on what I call the Chomsky-Hinton axis of innate knowledge vs. general learning, and I have declared that Hinton (general learning) is winning.  I still believe that.  However, I also believe that the learning architecture matters and that finding really powerful general-purpose architectures is nontrivial.  It seems harder than just designing neural architectures emulating Turing machines (or random access machines or functional programs).  We should not ignore the possibility that the structure of language is related to the structure of thought, and that the structure of thought is based on a nontrivial innate learning architecture.

A basic issue discussed in Evans's article is the fact that sophisticated language and thought seem to be restricted to humans and seem to have emerged at least 500 million years after the development of “simple” functions like vision. Presumably vision has been based on learning all along.  I put no weight on the claim that other species have similar functions — humans are clearly qualitatively smarter than starlings (or any other animal). Evans seems to accept that this is a mystery and does not seem to have a good solution. He attributes it speculatively to socialization and the need to coordinate behaviors. However, there are many social species and only humans have this qualitatively different level of intelligence. Although we have only a sample of one species, this level of intelligence is perfectly correlated with the existence of language.

At least one interpretation of the very late and sudden appearance of language is that it arose as a kind of phase transition in gradual improvements in learning.  A phase transition could occur when learning becomes adequate for the cultural transmission of software (language).  In this scenario thought precedes linguistic communication and is based on the internal data structures of a sophisticated neural architecture.  As learning becomes strong enough, it becomes capable of learning to interpret sophisticated communication signals.  This leads to the cultural transmission of “ideas” — concepts denoted by words such as “have” and “get”.  A phase transition then occurs as a culturally transmitted language (system of concepts) takes hold.  A deeper understanding of the world then arises from the cultural evolution of conceptual systems.  The value of absorbing cultural ideas then places increased selective pressure on the learning system itself.

But the question remains of what general learning architecture is required. After all, it took 500 million years for the phase transition to occur.  I believe there are clues to the structure of the underlying architecture in language itself.  To the extent that this is true, it may not be meaningful to distinguish a “language instinct” from an innate “learning architecture”.  Maybe the innate architecture involves entities and relations, a kind of database architecture, providing a general (universal) substrate of representation, computation, and learning.  Maybe such an architecture would provide a unification of Chomsky and Hinton …
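To make the entities-and-relations idea a little more concrete, here is a toy sketch of such a substrate as a minimal triple store, where culturally transmitted concepts like “have” and “get” become relations over entities. This is purely an illustration of the flavor of a database-like representation; the names (`KnowledgeBase`, `add`, `query`) are my own invention, not anything the post specifies.

```python
# A toy entities-and-relations substrate: facts are
# (relation, subject, object) triples, and queries match
# patterns over them.  Hypothetical illustration only.

class KnowledgeBase:
    def __init__(self):
        self.facts = set()  # set of (relation, subject, object) triples

    def add(self, relation, subject, obj):
        """Record one fact, e.g. ("have", "alice", "book")."""
        self.facts.add((relation, subject, obj))

    def query(self, relation, subject=None, obj=None):
        """Return all facts matching the pattern; None is a wildcard."""
        return [
            (r, s, o)
            for (r, s, o) in self.facts
            if r == relation
            and (subject is None or s == subject)
            and (obj is None or o == obj)
        ]

kb = KnowledgeBase()
kb.add("have", "alice", "book")
kb.add("have", "alice", "pen")
kb.add("get", "bob", "book")

# Everything alice "has" (sorted for stable output).
print(sorted(kb.query("have", subject="alice")))
```

The point of the sketch is only that a relational store of this kind is general purpose: any new culturally acquired concept is just another relation, with no change to the underlying machinery.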


3 Responses to Architectures and Language Instincts

  1. Great post! You argue that it is not meaningful to distinguish a “language instinct” from an *innate* “learning architecture”. This means that there is no “general learning” in the case of language, since for some architectures learning would be more successful than for others. And since learning language shares common properties with other learning tasks, I deduce, from what you say, that there is no “general learning” whatsoever. Actually the “learning architecture” is a kind of prior, or innate knowledge.

    • McAllester says:

      I still believe in universal priors — learning architectures that can learn any efficiently computable function. But high level languages seem to make writing (or learning?) software easier. The structure of language would suggest that the underlying human learning architecture is closer to declarative logic than to machine assembly code. Logic itself seems close to being a universal prior and yet is a very sophisticated architecture.

  2. Mark Johnson says:

    Clearly there is something that differentiates our intelligence from that of other species, including other primates. Chomsky thinks it might be something simple but general, like recursion. I think it’s fine to look for silver bullets — if you don’t look, you won’t find them — but there’s a reasonable chance that no single silver bullet exists.
