The Plausibility of Near-Term Machine Sentience.

When should we expect “operational sentience” — the point where the most effective way to interact with a machine is to assume it is sentient, that it understands what we tell it?  I want to make an argument that near-term machine sentience in this sense is plausible, where near-term means, say, ten years.

Deep learning has already dramatically improved natural language processing. Translation, question answering, parsing, and linking have all improved. A fundamental question is whether the recent advances in translation and linking provide evidence that we are getting close to “understanding”. I do not want to argue that we are getting close,
but rather just that we don’t know and that near-term sentience is “plausible”.

My case is based on the idea that paraphrase, entailment, linking, and structuring may be all that is needed. Paraphrase is closely related to translation — saying the same thing a different way or in a different language.  There has been great progress here. Entailment is determining whether one statement implies another.  Classical logic was developed as a model of entailment. But natural language entailment is far too slippery to be modeled by any direct application of formal logic. I interpret the progress of deep learning applied to NLP question answering as evidence for progress in entailment. Entailment is also closely related to paraphrase (no paraphrase is precisely meaning preserving) and the progress in translation seems related to potential progress in entailment.

“Linking” means tagging natural language phrases with the database entries that the phrases refer to.  The database can be taken to be Freebase or Wikidata or any knowledge graph. This is related to the philosophical problem of “reference” – what our language is referring to. But simply tagging phrases with database entries, such as “Facebook” or “the US constitution“, does not seem philosophically mysterious. I am using the term “structuring” to mean knowledge base population (KBP) – populating a knowledge base from unstructured text. Extracting entities and relationships (building a database) from unstructured text does not seem philosophically mysterious. It seems plausible to me that paraphrase, entailment, linking, and KBP will all see very significant near-term advances based on deep learning. The notions of “database” and “inference rule” (as in Datalog) presumably have to be blended with the distributed representations of deep learning and integrated into “linking” and “structuring”. This seems related to memory networks with large memories. But I will save that for another post.
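
To make the last two notions concrete, here is a toy Python sketch of what linking and structuring might produce for a single sentence.  The entity identifiers and relation names are invented for illustration; they stand in for entries in a knowledge graph such as Freebase or Wikidata, and a real system would of course produce both the links and the facts automatically.

# A sentence with phrases "linked" to (hypothetical) knowledge-graph entries.
sentence = "Facebook was founded by Mark Zuckerberg in 2004."
links = {
    "Facebook": "entity/0017",         # made-up identifiers, standing in for KB entries
    "Mark Zuckerberg": "entity/0042",
}

# "Structuring" (knowledge base population) extracts relational facts from the text.
facts = [("entity/0017", "founded_by", "entity/0042"),
         ("entity/0017", "founded_in", "2004")]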

The plausibility of near-term machine sentience is supported by the plausibility that language understanding is essentially unrelated to, and independent of, perception and action other than inputting and outputting language itself. There is a lot of language data out there. I have written previous blog posts against the idea of “grounding” the semantics of natural language in non-linguistic perception and action or in the physical position of material in space.

Average level human natural language understanding may prove to be easier than, say, average level human vision or physical coordination. There has been evolutionary pressure on vision and coordination much longer than there has been evolutionary pressure on language understanding. For me the main question is how close we are to effective paraphrase, entailment, linking and structuring.  NLP researchers are perhaps the best AI practitioners to comment on this blog post. I believe that Gene Charniak, a pioneer of machine parsing, believes that machine NLP understanding is at least a hundred years off. But I am more interested in laying out concrete arguments, one way or the other, than in taking opinion polls. Deep learning may have the ability to automatically handle the hundreds of thousands of linguistic phenomena that seem to exist in, say, English. People learn language. Is there some reasoned argument that this cannot work within a decade?

Maybe at some point I will write a longer post outlining what current work seems to me to be on the path to machine sentience.


Formalism, Platonism and Mentalese

This is a sequel to an earlier post on Tarski and Mentalese.  I am writing this sequel for two reasons. First, I just posted a new version of my treatment of type theory which focuses on “naive semantics”. I want to explain here what that means. Second, after many attempts to get formalists to accept Platonic proofs I have decided that it is best to approach the issue from the formalist point of view. In this new approach I present Platonism as just a translation from a formal language (such as a programming language or mathematical logic) to some symbolic internal mentalese. Human mathematical thought can presumably be modeled, to some extent at least, as symbolic computation involving expressions of mentalese.  Formalists will accept the idea of a symbolic language of thought but some deep learning people resist the idea that symbolic computation is relevant to any form of AI.  I will avoid that latter discussion here and assume that a mentalese of mathematics exists.

To understand the issues of formalism vs. Platonism and the relationship of both to semantics it is useful to consider an example. In type theory we have dependent pair types. These are written fairly obscurely as \Sigma_{x:\sigma} \;\tau[x].  This expression denotes a type (think class). An instance of this type (or class) is a pair (x,y) where x is an instance of the type \sigma and y is an instance of the type \tau[x], a type (class) whose definition involves the value x.  To make this more concrete consider the following example.

DiGraph = \Sigma_{s:set}\;(s \times s \rightarrow \mathbf{Bool})

This is the type (or class) of directed graphs, where we take a directed graph to be a pair (N,P) in which N is a set of “nodes” and P is an “edge predicate” on the nodes which is true of an ordered pair of nodes exactly when there is a directed edge from the first node to the second.
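
For readers who think in code, here is a rough Python analogue of this dependent pair type.  It is only a loose illustration, not type theory: the point is that the “type” of the second component (the edge predicate) depends on the value of the first component (the node set).

class DiGraph:
    # a pair (nodes, edge) where the domain of edge depends on the value of nodes
    def __init__(self, nodes, edge):
        self.nodes = set(nodes)
        self.edge = edge   # a Boolean-valued predicate on ordered pairs of nodes
        # check that edge is defined (and Boolean) on every pair drawn from nodes
        assert all(isinstance(edge(x, y), bool) for x in self.nodes for y in self.nodes)

example = DiGraph({1, 2, 3}, lambda x, y: y == x + 1)   # edges 1->2 and 2->3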

The above paragraph defines the notation \Sigma_{x:\sigma}\;\tau[x] in a way that would be sufficient when introducing notation in a mathematics class.  Mathematical notation is generally defined before it is used, and the definition above follows the same style used for any notation introduced in the practice of mathematics. A simpler example would be to introduce the symbol \cup by saying that for sets s and u we write s \cup u to denote the union of the sets s and u.  This style of definition provides a translation of formal notation into the natural language (such as English) which is the language of mathematical discourse in practice.

But logicians are not satisfied with this style of definition.  They prefer a more formal treatment known as Tarskian compositional semantics.  My previous post introduces this by treating the semantics of arithmetic expressions.  We introduce a value function V[e]\rho where e is an arithmetic expression and \rho is a variable interpretation assigning a value to the variables occurring in e.  Now consider how notation is defined in mathematics.  We might say “we write n! for the product of the natural numbers from 1 to n”.  We can write a compositional semantics specification of the meaning of n! as follows.

V[e!]\rho \equiv \prod_{i=1}^{V[e]\rho}\;i

Here V[e]\rho is the notation for the value of e under variable interpretation \rho.  This looks more explicitly like a translation between formal notations. However, it assumes that the product notation on the right-hand side is already understood — can be reasoned about in internal mentalese.  The specification can still be interpreted as a translation from an external notation to mentalese.

In type theory, as in most programming languages, variables are typed.  A math class might start with the sentence “Let B be a Banach space.”  In type theory this is a declaration of the variable B as being of type “Banach space”.  A set of variable declarations and assumptions constraining the variable values is called a “context”. Contexts are often denoted by \Gamma.  In my work on type theory I subscript the value function with a context declaring the types of the free variables that can occur in the expression. For example we might have

V_{G:\mathbf{DiGraph}}\;[e[G]]\rho

Here \rho must assign a value to the variable G and that value must be a directed graph.  The type of directed graphs can be expressed as a dependent pair type as described above.

The phrase “naive semantics”, or more specifically “naive type theory”, involves assigning the expressions their naive meanings.  For example we have

V_\Gamma[\Sigma_{x:\sigma}\;\tau[x]]\rho \;=\; \{(a,b) : a \in V_\Gamma[\sigma]\rho,\; b \in V_{\Gamma;\;x:\sigma}[\tau[x]]\,\rho[x \leftarrow a]\}

This is just the informal definition of the dependent pair type notation given above, now expressed within a Tarskian compositional semantics.  The above definition can be viewed as simply providing a translation of a formal expression into the notation of practical mathematical discourse and hence as a translation of formal expressions into the language of mentalese.
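
As a sanity check, instantiating this clause on the DiGraph example above (and reading the \rightarrow type naively as a collection of Boolean-valued functions) gives

V_\Gamma[\mathbf{DiGraph}]\rho \;=\; \{(N,P) : N \in V_\Gamma[set]\rho,\; P \in V_{\Gamma;\;s:set}[s \times s \rightarrow \mathbf{Bool}]\,\rho[s \leftarrow N]\}

which is just the class of pairs (N,P) in which N is a set and P is an edge predicate on N, exactly the informal reading we started with.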

Of course when push comes to shove mathematics grounds out in sets. The notion of “set” is taken to be primitive and is not defined in terms of more basic notions.  We have learned, however, to be precise about the properties of sets using the axioms of ZFC.  In the modern understanding of sets (since about 1925) we distinguish sets from classes where classes are collections too large to be sets such as the collection of all groups or the collection of all topological spaces.  Naive type theory starts from this (sophisticated) notion of sets and classes and then gives all the other expressions of type theory their naive meanings.  This is no different from the introduction of notation in any particular area of mathematics except that type theory is about all the areas of mathematics — it is part of the foundations of mathematics.

Naive type theory is almost trivial.  However, considerable effort is required to handle the notion of isomorphism within naive type theory.  Handling isomorphism within naive type theory is the main point of my work in this area.


Comprehension Based Language Modeling

One of the holy grails of the modern deep learning community is to develop effective methods for unsupervised learning.  I have always held out hope that the semantics of English could be learned from raw unlabeled text.  The plausibility of a statement should affect the probability of that statement. It would seem that a perfect language model — one approaching the true perplexity or entropy of English — must incorporate semantics.  For this reason I read with great interest the recent dramatic improvement in language modeling achieved by Jozefowicz et al. at Google Brain.  They report a perplexity of about 30 on Google’s One Billion Word Benchmark.  Perplexities below 60 have been very difficult to achieve.
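
As a reminder of what is being measured, perplexity is just the exponential of the model's average per-word cross-entropy on held-out text.  A minimal sketch in Python, assuming the model supplies a natural-log probability for each test word:

import math

def perplexity(word_log_probs):
    # word_log_probs: the model's log-probability (natural log) of each word in the test set
    cross_entropy = -sum(word_log_probs) / len(word_log_probs)   # nats per word
    return math.exp(cross_entropy)

# e.g. a uniform model over a 30,000 word vocabulary has perplexity 30,000
print(perplexity([math.log(1.0 / 30000)] * 5))

A perplexity of 30 means the model is, on average, as uncertain as if it were choosing uniformly among 30 equally likely next words.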

At TTIC we have been working on machine comprehension as another direction for the acquisition of semantics. We created a “Who Did What” benchmark for reading comprehension based on the LDC Gigaword Corpus.  I will discuss reading comprehension a bit more below, but I first want to focus on the nature of news corpora.  We found that the Gigaword corpus contained multiple articles on the same events.  We exploited this fact in the construction of our Who-did-What dataset.  I was curious how the Billion Word Benchmark handled this multiplicity.  Presumably the training data contained information about the same entities and events that occur in the test sentences. What is the entropy of a statement under the listener’s probability distribution when the listener knows (the semantic content of) what the speaker is going to say?  To make this issue clear, here are some examples of training and test sentences from Google’s One Billion Word Corpus:

Train: Saudi Arabia on Wednesday decided to drop the widely used West Texas Intermediate oil contract as the benchmark for pricing its oil.

Test: Saudi Arabia , the world ‘s largest oil exporter , in 2009 dropped the widely used WTI oil contract as the benchmark for pricing its oil to customers in the US.

Train: Bobby Salcedo was re-elected in November to a second term on the board of the El Monte City School District.

Test: Bobby Salcedo was first elected to the school board in 2004 and was re-elected in November.

These examples were found by running the Lucene information retrieval system on the training data using the test sentence as the query (thanks to Hai Wang).  These examples suggest that language modeling is related to reading comprehension.  Reading comprehension is the task of answering a multiple choice question about a passage involving entities and events not present in general structured knowledge sources. Recently several large scale reading comprehension benchmarks have been created with cloze-style questions — questions formed by deleting a word or phrase from a corpus sentence or article summary.  Cloze-style reading comprehension benchmarks include the CNN/Daily Mail benchmark, the Children’s Book Test, and our Who-did-What benchmark mentioned above.  There is also a recently constructed LAMBADA dataset which is intended as a language modeling benchmark but which can be approached most effectively as a reading comprehension task (Chu et al.).  I predict a convergence of reading comprehension and language modeling — comprehension based language modeling.
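
A cloze-style question is mechanically simple to construct.  The following sketch, using one of the training sentences above, is only illustrative; actual benchmarks choose the deleted phrase much more carefully (for example, a named entity that also appears in a second article about the same event).

def make_cloze(sentence, answer):
    # form a cloze-style question by deleting a phrase from the sentence
    return sentence.replace(answer, "_____"), answer

question, answer = make_cloze(
    "Bobby Salcedo was re-elected in November to a second term "
    "on the board of the El Monte City School District.",
    "El Monte")
print(question)   # ... on the board of the _____ City School District.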

For comprehension based language modeling to make sense one should have training data describing the same entities and events as those occurring in the test data.  It might be good to measure the entropy (or perplexity) of just the content words, or even just named entities.  But one should of course avoid including the test sentences in the training data. It turns out that Google’s Billion Word Benchmark has a problem in this regard.  We found the following examples:

Train: Al Qaeda deputy leader Ayman al-Zawahri called on Muslims in a new audiotape released Monday to strike Jewish and American targets in revenge for Israel ‘s offensive in the Gaza Strip earlier this month.

Test: Al-Qaida deputy leader Ayman al-Zawahri called on Muslims in a new audiotape released Monday to strike Jewish and American targets in revenge for Israel ‘s recent offensive in the Gaza Strip.

Train: RBS shares have only recently risen past the average level paid by the Government , but Lloyds shares are still low despite a recent banking sector rally.

Test: RBS shares have only recently risen past the average level paid by the government , but Lloyds shares are still languishing despite the recent banking sector rally .

Train: WASHINGTON – AIDS patients should have a genetic test before treatment with GlaxoSmithKline Plc ‘s drug Ziagen to see whether they face a higher risk of a potentially fatal reaction , U.S. regulators said on Thursday .

Test: WASHINGTON ( Reuters ) – AIDS patients should be given a genetic test before treatment with GlaxoSmithKline Plc ‘s drug , Ziagen , to see if they face a higher risk of a potentially fatal reaction , U.S. regulators said on Thursday .

These examples occur when slightly edited versions of an article appear on different newswires or in article revisions. Although we have not done a careful study, it appears that something like half of the test sentences in Google’s Billion Word Benchmark are essentially duplicates of training sentences.

In spite of the problems with the Billion Word Benchmark, an appropriate benchmark for comprehension based language modeling should be easy to construct.  For example, we could require that the test sentences be separated by, say, a week from any training sentence.  Or we could simply de-duplicate the articles using a soft-match criterion for duplication.  The training data should consist of complete articles rather than shuffled sentences.  Also note that new test data is continuously available — it is hard to overfit next week’s news.
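
One possible soft-match criterion, offered here only as an assumption about what “soft match” might mean, is word-set overlap: treat two sentences as duplicates when the Jaccard similarity of their word sets exceeds a threshold.

def near_duplicate(s1, s2, threshold=0.8):
    # soft match: Jaccard similarity of the word sets of the two sentences
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    return len(w1 & w2) / len(w1 | w2) >= threshold

The threshold would of course have to be tuned, and something stronger than word overlap may be needed for pairs as close as the examples above.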

These are exciting times.  Can we obtain a command of English in a machine simply by training on very large corpora of text?  I find it fun to be optimistic.


Cognitive Architectures

Within the deep learning community there is considerable interest in neural architecture. There are convolutional networks, recurrent networks, LSTMs, GRUs, attention mechanisms, highway networks, inception networks, residual networks, fractal networks and many more.  Most of these architectures can be viewed as certain feed-forward circuit topologies.  Circuits are a universal model of computation.  However, human programmers find it more productive to specify algorithms in high level programming languages.  Presumably this applies to learning as well — learning should be easier with a higher level architecture.

Of course the deep learning community is aware of the relationship between neural architecture and models of computation. General “high level” architectures have been proposed. We have neural Turing machines (actually random access machines), parsing architectures with stacks, and neural architectures for functional programming.  There was a nice NIPS workshop on reasoning, attention and memory (RAM) addressing such fundamental architectural issues.

It seems reasonable to use classical models of computation as inspiration for neural architectures.  But it is important to be aware of the large variety of classical architectural ideas.  Various twentieth century discrete architectures may provide a rich source of inspiration for twenty first century differentiable architectures. Here is a list of my favorite classical architectural ideas.

Mathematical Logic:   Starting with the ancient Greeks, logic has been developed directly as a model of knowledge representation and thought. Mathematical logic organizes knowledge around entities and relations. Databases are closely related to predicate calculus. Logic is capable of representing knowledge in any domain of discourse. While entities and relations are central to logic, logic involves a variety of additional features such as function application, quantification, and types. Logic also provides the intellectual framework underlying mathematics. Achieving the singularity will presumably require machines to be capable of programming computers. Computer programming seems to require sound analytical (mathematical) reasoning.

Production Systems and Logic Programming:  This style of architecture was championed by Herb Simon and Allen Newell.  It is a way of making logical rules compute efficiently. I will interpret production systems fairly broadly to include various rule-based languages such as SOAR, Ops5, Prolog and Datalog. The cleanest of this family of architectures is bottom-up logic programming, which has a nice relationship to general dynamic programming and is the foundation of the Dyna programming language. Dynamic programming algorithms can be viewed as feed-forward networks where each entry in a dynamic programming table can be viewed as a structured-output unit which computes its value from earlier units and provides its value to later units.
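
As a small illustration of bottom-up logic programming, here is the standard transitive-closure example written as a naive fixed-point loop in Python.  This is only a sketch of the evaluation strategy, not of Dyna or of any particular production system.

# Rules: path(X,Y) :- edge(X,Y).    path(X,Z) :- path(X,Y), edge(Y,Z).
edges = {("a", "b"), ("b", "c"), ("c", "d")}
path = set(edges)            # the facts given directly by the first rule

changed = True
while changed:               # apply the second rule until no new facts are derived
    changed = False
    for (x, y) in list(path):
        for (y2, z) in edges:
            if y == y2 and (x, z) not in path:
                path.add((x, z))
                changed = True

print(sorted(path))          # now includes ("a", "c"), ("a", "d"), ("b", "d")

Each derived fact here plays the role of an entry in a dynamic programming table, computed from earlier entries and available to later ones.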

Inductive Logic Programming:  This is a classical unification of machine learning and logic programming championed by Stephen Muggleton. The basic idea is to take a set of assertions in predicate calculus (observed data) and generalize them to a “theory” (a logic program) that is consistent and that implies the data.

Frames, Scripts, and Object-Oriented Programming:  Frames and scripts were championed as a general framework for knowledge representation by Marvin Minsky, Roger Schank, and Charles Fillmore. Frames are related to object-oriented programming in the sense that an instance of the “room frame” (or room class) has fillers for fields such as “ceiling”, “windows” and “furniture”.  Frames also seem related to the ontology of mathematics.  For example, a mathematical field consists of a set together with two operations (addition and multiplication) satisfying certain properties.  The term “structure” has a well defined meaning in model theory (a branch of logic) which is closely related to the notion of a class instance in object-oriented programming.  A specific mathematical field is a structure (in the technical sense) and is an instance of the general mathematical class of fields.
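
The frame-to-class analogy is easy to see in code.  Here is a minimal Python sketch of the “room frame” as a class whose instances carry fillers for the slots mentioned above; the particular slots and values are of course just illustrative.

class Room:
    # a frame with slots ("fillers") for ceiling, windows and furniture
    def __init__(self, ceiling, windows, furniture):
        self.ceiling = ceiling
        self.windows = windows
        self.furniture = furniture

my_office = Room(ceiling="tile", windows=2, furniture=["desk", "chair", "bookshelf"])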

The Situation Calculus and Modal Logic:  In the situation calculus statements take meaning in “situations”.  A “fluent” is a mapping from situations to truth values.  Actions change one situation into another.  This leads to the STRIPS model of actions and planning. Situations are closely related to the possible worlds of modal logic.

Monads: Monads generalize the relationship between pure (stateless) functional programming, as in the programming language Haskell, and the more familiar effect-based programming as in C or C++ where assignment statements change the state of the computation.  The mapping (or compilation) from an effect-driven program to a pure (stateless) program defines the state monad.  There are different monads.  The state monad treats each action as a mapping from an input state to an output state. The power set monad (or non-determinism monad) treats each action as a mapping from a set of states to a set of states.  The probability monad treats each action as a mapping from a probability distribution to a probability distribution.  The probability monad gives rise to probabilistic programming languages. There are also more esoteric monads such as the CPS monad which treats each action as mapping a state of the stack to a state of the stack, thereby converting recursion to iteration. In a pure language such as Haskell the use of a monad to suppress a state argument typically makes code more readable. There also seems to be a relationship between the states of the common monads and the situations or possible worlds of the situation calculus and modal logic.
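
To make the state monad concrete, here is a small Python sketch (in Haskell this would be the standard State monad).  An “action” is represented as a function from an input state to a pair of a result and an output state, and bind sequences two such actions; the counter example at the end is purely illustrative.

def unit(value):
    # an action that returns value and leaves the state unchanged
    return lambda state: (value, state)

def bind(action, f):
    # run action, then feed its result to f, which produces the next action
    def combined(state):
        result, new_state = action(state)
        return f(result)(new_state)
    return combined

get = lambda: (lambda state: (state, state))      # read the current state
put = lambda s: (lambda state: (None, s))         # replace the state

increment = bind(get(), lambda n: put(n + 1))     # an action on an integer state
print(increment(41))                              # (None, 42)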

Conclusion

I believe that human learning is based on a differentiable universal learning architecture and that domain specific priors are not required. But it is unclear how elaborate the general architecture is.  It seems worth considering the above list of classical architectural ideas and the possibility that these discrete architectures can be made differentiable.


Architectures and Language Instincts

This post is inspired by a recent book and article by Vyvyan Evans declaring the death of the Pinker-Chomsky notion of a language instinct.  I have written previous posts on what I call the Chomsky-Hinton axis of innate-knowledge vs. general learning and I have declared that Hinton (general learning) is winning.  I still believe that.  However, I also believe that the learning architecture matters and that finding really powerful general purpose architectures is nontrivial.  It seems harder than just designing neural architectures emulating Turing machines (or random access machines or functional programs).  We should not ignore the possibility that the structure of language is related to the structure of thought and that the structure of thought is based on a nontrivial innate learning architecture.

A basic issue discussed in Evans’s article is the fact that sophisticated language and thought seem to be restricted to humans and seem to have taken at least 500 million years to appear after the development of “simple” functions like vision. Presumably vision has been based on learning all along.  I put no weight on the claim that other species have similar functions — humans are clearly qualitatively smarter than starlings (or any other animal). Evans seems to accept that this is a mystery and does not seem to have a good solution. He attributes it speculatively to socialization and the need to coordinate behaviors. However, there are many social species and only humans have this qualitatively different level of intelligence. Although we have only a sample of one species, this level of intelligence has perfect correlation with the existence of language.

At least one interpretation of the very late and sudden appearance of language is that it arose as a kind of phase transition in gradual improvements in learning.  A phase transition could occur when learning becomes adequate for the cultural transmission of software (language).  In this scenario thought precedes linguistic communication and is based on the internal data structures of a sophisticated neural architecture.  As learning becomes strong enough, it becomes capable of learning to interpret sophisticated communication signals.  This leads to the cultural transmission of “ideas” — concepts denoted by words such as “have” and “get”.  A phase transition then occurs as a culturally transmitted language (system of concepts) takes hold.  A deeper understanding of the world then arises from the cultural evolution of conceptual systems.  The value of absorbing cultural ideas then places  increased selective pressure on the learning system itself.

But the question remains of what general learning architecture is required. After all, it took 500 million years for the phase transition to occur.  I believe there are clues to the structure of the underlying architecture in language itself.  To the extent that this is true, it may not be meaningful to distinguish a “language instinct” from an innate “learning architecture”.  Maybe the innate architecture involves entities and relations, a kind of database architecture,  providing a general (universal) substrate of representation, computation, and learning.  Maybe such an architecture would provide a unification of Chomsky and Hinton …


Why we need a new foundation of mathematics

I just finished what I am confident is the final revision of my paper on the foundations of mathematics.  I want to explain here what this is all about. Why do we need a new foundation of mathematics?

ZFC set theory has stood as a fixed eight-axiom foundation of mathematics since the early 1920s. ZFC does an excellent job of capturing the notion of provability in mathematics.  Things not provable in ZFC, such as the continuum hypothesis, the existence of inaccessible cardinals, or the consistency of ZFC, are forever out of the reach of mathematics.  As a circumscription of the provable, ZFC suffices.

The problem is that ZFC is untyped. It is a kind of raw assembly language. Mathematics is about “concepts” — groups, fields, vector spaces, manifolds and so on. We want a type system where the types are in correspondence with the naturally occurring concepts. In mathematics every concept (every type) is associated with a notion of isomorphism. We immediately understand what it means for two graphs to be isomorphic. Similarly for groups, fields and topological spaces, and in fact, all mathematical concepts. We need to define what we mean by a concept (a type) in general and what we mean by “isomorphism” at each type.  Furthermore,  we must prove an abstraction theorem that isomorphic objects are inter-substitutable in well-typed contexts.
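
To see how much is packed into “we immediately understand”, here is a brute-force Python sketch of what isomorphism means for directed graphs given as a node set and an edge set.  It is only the definition spelled out in code, under the assumption that an edge-set representation is acceptable; no claim is made that this is how the notion should be formalized.

from itertools import permutations

def isomorphic(nodes1, edges1, nodes2, edges2):
    # two directed graphs are isomorphic if some bijection between the node sets
    # carries the edges of the first exactly onto the edges of the second
    if len(nodes1) != len(nodes2):
        return False
    nodes1, nodes2 = list(nodes1), list(nodes2)
    for image in permutations(nodes2):
        f = dict(zip(nodes1, image))
        if {(f[x], f[y]) for (x, y) in edges1} == set(edges2):
            return True
    return False

print(isomorphic({1, 2, 3}, {(1, 2), (2, 3)},
                 {"a", "b", "c"}, {("b", "c"), ("a", "b")}))   # True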

We also want to prove “Voldemort’s theorem”. Voldemort’s theorem states that certain objects exist but cannot be named.  For example, it is impossible to name any particular point on a circle (unless coordinates are introduced), any particular node in an abstract complete graph, any particular basis (coordinate system) for an abstract vector space, or any particular isomorphism between a finite dimensional vector space and its dual.

Consider Wigner’s famous statement on the unreasonable effectiveness of mathematics in physics. I feel comfortable paraphrasing this as stating the unreasonable effectiveness of mathematical concepts in physics – the effectiveness of concepts such as manifolds and group representations. Phrased this way, it can be seen as stating the unreasonable effectiveness of type theory in physics. Strange.

The abstraction theorem and Voldemort’s theorem are fundamentally about types and well-typed expressions.  These theorems cannot be formulated in classical (untyped) set theory.  The notion of canonicality or naturality (Voldemort’s theorem) is traditionally formulated in category theory.  But category theory does not provide a grammar of types defining the mathematical concepts. Nor does it give a general definition of the category associated with an arbitrary mathematical concept.  It reduces “objects” to “points” where the structure of the points is supposed to be somehow represented by the morphisms.  I have always had a distinct dislike for category theory.

I started working on this project about six years ago.  I found that defining the semantics of types in a way that supports the abstraction theorem and Voldemort’s theorem was shockingly difficult (for me at least).  About three years ago I became aware of homotopy type theory (HoTT) and Voevodsky’s univalence axiom which establishes a notion of isomorphism and an abstraction theorem within the framework of Martin-Löf type theory. Vladimir Voevodsky is a Fields medalist and full professor at IAS who organized a special year at IAS in 2012 on HoTT.

After becoming aware of HoTT I continued to work on Morphoid type theory (MorTT) without studying HoTT.  This was perhaps reckless but it is my nature to want to do it myself and then compare my results with other work.  In this case it worked out, at least in the sense that MorTT is quite different from HoTT. I quite likely could not have constructed MorTT had I been distracted by HoTT.  Those interested in details of MorTT and its relationship to HoTT should see my manuscript.


The Foundations of Mathematics.

It seems clear that most everyday human language, and presumably most everyday human thought, is highly metaphorical.  Most human statements do not have definite Boolean truth values.  Everyday statements have a degree of truth and appropriateness as descriptions of actual events. But while most everyday language is metaphorical, it is also true that some people are capable of doing mathematics — they are capable of working with statements that do have precise truth values.  Software engineers in particular must deal with the precise concepts and constructs of formal programming languages.  If we are ever going to achieve the singularity it seems likely that we will have to build machines capable of software engineering and hence capable of precise mathematical thought.

How well do we understand mathematical thought?  There are many talented mathematicians in the world.  But the ability to do mathematics does not imply the ability to create a machine that can do mathematics.  Logic is metamathematics — the study of the computational process of doing mathematics.  Metamathematics seems more a part of AI than a part of mathematics.

Mathematical thought exhibits various surprising and apparently magical abilities.  A software engineer understands the distinction between the implementation and the behavior implemented.  Consider a priority queue.  It is obvious that if we have two correct implementations of priority queues then we can (or at least should be able to) swap one in for the other in a program that uses a priority queue.  This notion that behaviorally identical modules are (or should be) interchangeable in large software systems is one of the magical properties of mathematical thought.  How is “identical behavior” understood?

The notion of identical behavior is closely related to the phenomenon of abstraction.  Perhaps the most fundamental abstraction is the concept of a cardinal number such as the number seven.  I can say I have seven apples and you have two apples and together we have nine apples.  The idea that seven plus two is nine abstracts away from the fact that we are talking about apples and seems to involve thought about sets at a more abstract level.  We are taking the union of two disjoint sets.  The union “behavior” is defined at the level of cardinal numbers — the abstraction of a set to its number of elements.  Another example of common sense abstraction is the concept of a sequence — the ninth item will be the second item after the seventh item.  Yet another example is the concept of a bag of items such as the collection of coins in my pocket which might be three nickels and two quarters.  We machine learning researchers have no trouble understanding that a document can be abstracted to a bag of words.  What mental computation underlies this immediate understanding of abstraction?  What is abstraction?

It seems clear that abstraction involves the specification of the interface to an object.  For example the interface to a priority queue is the set of operations it supports.  A bag (multiset) is generally defined as a mapping from a set (such as the set of all words) to a number — the number of times the word occurs in the bag.  But this definition misses the point (in my opinion) that any function f:\sigma \rightarrow \tau defines a bag of \tau.  For example, a document is a function from ordinal numbers “nth” to words.  Any function to words can be abstracted to a bag of words.  The amazing thing is that we do not have to be taught this.  We just know already.  We also know that in passing from a sequence to a bag we are losing information.
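
The sequence-to-bag abstraction is easy to write down, which makes it all the more striking that no one needs to be taught it.  A small Python illustration:

from collections import Counter

# a document viewed as a function from positions ("nth") to words
document = ["the", "cat", "sat", "on", "the", "mat"]

# abstracting the sequence to a bag of words keeps the counts and forgets the order
bag = Counter(document)
print(bag)    # Counter({'the': 2, ...})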

Another example is the keys on my key loop.  I understand that my two car keys are next to each other on the loop even though there is no “first” key among the eight keys on the loop.  Somehow I understand that “next to” is part of the “behavior” provided by my key loop while “first” is not.

Lakoff and Nunez argue in the book “Where Mathematics Comes From” that we come to understand numbers through a process of metaphor to physical collections.  But I think this is backwards.  I think we are able to understand physical collections because we have an innate cognitive architecture which first parses the world into “things” (entities) and assigns them types and groupings.  A properly formulated system of logic should include, as a fundamental primitive, a notion of abstraction.  The ability to abstract a group to a cardinality, or a sequence to a bag, seems to arise from a general innate capacity for abstraction.  Presumably there is some innate cognitive architecture capable of supporting “abstract thought” that distinguishes humans from other animals. A proper formulation of logic — of the foundations of mathematics — will presumably provide insight into the nature of this cognitive architecture.

I will end by shamelessly promoting my paper, Morphoid Type Theory,  which I just put on arXiv.  The paper describes a new type theoretic foundation of mathematics focusing on abstraction and isomorphism (identity of behavior).  Unfortunately the paper is highly technical and is mostly written for professional type theorists and professional mathematicians.   Section 2, “inference rules”, should be widely accessible.
