CTC and the EG Algorithm: Discrete Latent Choices without Reinforcement Learning

Section 20.9.1 of Goodfellow, Bengio and Courville is titled “Back-Propagating through Discrete Stochastic Operations”.  It discusses the REINFORCE algorithm which requires sampling sequences of discrete latent choices.  It is well known that sampling choices yields a high variance gradient estimate.  While such sampling seems necessary for reinforcement learning applications (such as games) there are other settings where one can train discrete latent choices using a closed form gradient and avoid high variance choice sampling.  This is done in the CTC algorithm (Connectionist Temporal Classification, Graves et al. 2006).  CTC is arguably the currently dominant approach to speech recognition.  It is mentioned in Goodfellow et al. in the section on speech recognition but not in the general context of training discrete latent choices.

This post is also a sequel to the previous post on variational auto-encoders and EM.  In CTC, as in various other models, we are interested in a marginal distribution over latent variables.  In this case we have training pairs (x, y) and are interested in maximizing the marginal probability P_\Theta(y|x).

\Theta^* = \mathrm{argmax}_\Theta\;  E_{(x,y)}\;\ln P_\Theta(y|x)

= \mathrm{argmax}_\Theta \; E_{(x,y)}\;\ln\; \sum_z \;P_\Theta(y,z|x)

In CTC x is a speech signal, y is a (time-compressed) phonetic transcription, and z is a sequence of latent “emission decisions”.  This is described in more detail below.

The previous post noted that both VAEs and EM can be phrased in terms of a unified objective function involving an “encoder distribution” P_\Psi(z|x,y).  Here we have made everything conditional on an input x.

\Psi^*,\Theta^* = \mathrm{argmax}_{\Psi,\Theta}\;E_{(x,y)}\;{\cal L}(\Psi,\Theta,x,y)

{\cal L}(\Psi,\Theta,x,y) = E_{z \sim P_\Psi(z|x,y)}\; \ln P_\Theta(z,y|x) + H(P_\Psi(z|x,y))\;\;\;(1)

= \ln\;P_{\Theta}(y|x) - KL(P_\Psi(z|x,y),P_{\Theta}(z|x,y))\;\;\;(2)

The equivalence between (1) and (2) is derived in the previous post.  In EM one performs alternating optimization where (2) is used to optimize the encoder distribution P_\Psi(z|x,y) while holding the model distribution P_\Theta(y,z|x) fixed, and (1) is used to optimize the model distribution P_\Theta(z,y|x) while holding the encoder distribution P_\Psi(z|x,y) fixed.  In EM each of these optimizations is done exactly in closed form yielding an exact alternating maximization algorithm.  In CTC the E step optimization is computed exactly in closed form but the M step is replaced by an exact closed form calculation of the gradient of (1) with respect to \Theta.  I will call this the EG algorithm (expected gradient).

Example: A Deep HMM Language Model: A deep language model is typically an auto-regressive RNN [1].

[Figure: an auto-regressive RNN language model (image credit: Andrej Karpathy)]

Here the model stochastically generates a word and then uses the generated word as input in computing the next hidden state.  In contrast to a classical finite-state HMM, the hidden state is determined by the observable word sequence.  In a classical HMM many different hidden state sequences can generate the same word sequence — the hidden states are latent.  We can introduce discrete latent hidden states H_0, \ldots, H_T such that the full hidden state is a pair (h_t, H_t) where h_t is an RNN state vector determined by observable words and H_t is a latent discrete state not determined by the observable sequence.  We allow the discrete hidden state transition probabilities to depend on the hidden vector of an RNN.  In equations we have

h_{t+1} = \mathrm{RNNcell}_\Theta(h_t, e(w_t))

P_\Theta(H_{t+1} \;|\;w_1,\ldots,\;w_t,\;H_1,\ldots,\;H_t) = P_\Theta(H_{t+1}\;|\;h_{t+1},e(H_t))

P_\Theta(w_{t+1} \;|\;w_1,\ldots,w_t,\;H_1,\ldots,\;H_{t+1}) = P_\Theta(w_{t+1}\;|\;h_{t+1},e(H_{t+1})).

Here \Theta is the system of network parameters and e(w) and e(H) are vector embeddings of the word w and discrete state H. This model allows P_\Theta(w_1,\ldots,w_T) to be computed by dynamic programming over the possible hidden state sequences using the forward-backward procedure.  The gradient of the probability can also be computed in closed form and we can do SGD without sampling the discrete hidden states.  For dynamic programming to work it is important that the state vector h_t is determined by h_0 and the observed words w_1,\ldots,w_{t-1} and is not influenced by the hidden state sequence H_1,\ldots,H_{t-1}.
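Here is a minimal PyTorch sketch of this model (my own illustration, with made-up sizes, a GRU cell standing in for RNNcell_\Theta, and linear-plus-softmax networks for the transition and emission distributions).  The forward recursion over the discrete states computes \ln P_\Theta(w_1,\ldots,w_T), and because the recursion is differentiable, autograd delivers the exact gradient with no sampling of the H_t.

```python
# A sketch, not the post's implementation: a deep HMM language model whose
# marginal probability is computed by the forward algorithm in log space.
import torch
import torch.nn as nn

V, K, D = 1000, 8, 64          # vocabulary size, number of discrete states, vector dimension
cell    = nn.GRUCell(D, D)     # stands in for RNNcell_Theta
w_emb   = nn.Embedding(V, D)   # e(w)
H_emb   = nn.Embedding(K, D)   # e(H)
trans   = nn.Linear(2 * D, K)  # logits of P(H_{t+1} | h_{t+1}, e(H_t))
emit    = nn.Linear(2 * D, V)  # logits of P(w_{t+1} | h_{t+1}, e(H_{t+1}))

def log_marginal(words):                       # words: LongTensor of shape (T,); the first word is treated as given
    h     = torch.zeros(1, D)                  # h_0
    alpha = torch.zeros(K)                     # log alpha over the initial discrete state, uniform up to a constant
    for t in range(len(words) - 1):
        h  = cell(w_emb(words[t]).unsqueeze(0), h)           # h_{t+1}, depends only on the observed words
        hk = torch.cat([h.expand(K, D), H_emb.weight], 1)    # h_{t+1} paired with each e(H)
        log_A = torch.log_softmax(trans(hk), dim=1)          # row i, column j: log P(H_{t+1}=j | h_{t+1}, e(H_t=i))
        log_B = torch.log_softmax(emit(hk), dim=1)[:, words[t + 1]]  # log P(w_{t+1} | h_{t+1}, e(H_{t+1}=j))
        alpha = torch.logsumexp(alpha.unsqueeze(1) + log_A, dim=0) + log_B
    return torch.logsumexp(alpha, dim=0)       # log P(w_1, ..., w_T) up to the constant above

loss = -log_marginal(torch.randint(0, V, (20,)))
loss.backward()                                # exact gradient; no sampling of the discrete states
```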

The general case: The general EG algorithm alternates an E step, which is an exact optimization as in EM, and a G step in which we take a gradient step in \Theta defined by computing the gradient of (1):

\nabla_\Theta {\cal L}(\Psi,\Theta,x,y) = E_{z \sim P_\Psi(z|x,y)}\; \nabla_\Theta\;\ln\;P_\Theta(y,z|x)\;\;\;\;\;(3)

By (2), and the fact that the gradient of the KL divergence with respect to \Theta is zero when P_\Psi(z|x,y) = P_\Theta(z|x,y), for the G step we also have

\nabla_\Theta {\cal L}(\Psi, \Theta,x,y) = \nabla_\Theta \ln P_\Theta(y|x)\;\;\;(4)

Combining (3) and (4) we see that the gradient defined by (3) in the G step equals the gradient of the top level objective.

We now consider the case where P_\Theta(z,y|x) is defined by a generative process where the pair (z,y) is generated stochastically by a series of discrete decisions.  Each decision is assumed to be a draw from a certain probability distribution. I will write Q \leadsto j for the decision (event) of drawing j from distribution Q .  Every decision made in the above HMM has the form

P_\Theta(H|h_{t+1},e(H^i)) \leadsto H^j

where H^i and H^j are particular discrete hidden states.  For a sequence of length T and for n discrete hidden states we have a total of Tn^2 possible choice events and these choice events form the edges of an HMM trellis.  Note that it is important that h_t does not depend on the path through the trellis.

Returning to the general case, let {\cal C}(z,y|x) be the set of choices made in generating (z,y) under the model P_\Theta(z,y|x).  We then have

\ln P_\Theta(z,y|x) = \sum_{(Q \leadsto  j) \in {\cal C}(z,y|x)}\;\ln P(j\;|\;Q).

We now have

\nabla_\Theta \ln P_\Theta (y|x) =  E_{z \sim P_\Theta(z|x,y)}\;\nabla_\Theta \;\sum_{(Q \leadsto j) \in {\cal C}(y,z|x)} \;\; \ln P(j\;|\;Q)\;\;\;\;(5)

Now let {\cal C}(y|x) = \bigcup_z \;{\cal C}(y,z|x).  In the HMM example {\cal C}(y|x) is the set of all edges in the HMM trellis while {\cal C}(z,y|x) is one particular path through the trellis.  Also let 1((Q\leadsto j) \in {\cal C}(z,y|x)) be the indicator function that the choice Q \leadsto j occurs when generating (z,y) from x. We can now rewrite (5) as

\nabla_\Theta \ln P_\Theta(y|x) = \sum_{(Q \leadsto j) \in{\cal C}(y|x)} \left(E_{z \sim P_\Theta(z|x,y)} \; 1((Q \leadsto j)\in{\cal C}(y,z|x))\right)\; \nabla_\Theta\;\ln P_\Theta(j|Q)\;\;\;\;(6)

The expectations E_z\; 1((Q \leadsto j) \in {\cal C}(y,z|x)) are assumed to be computable by dynamic programming.  In the HMM example these are the edge probabilities in the HMM trellis.
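For the plain-HMM case these expected indicators are just the usual forward-backward edge posteriors.  Here is a small numpy sketch of that dynamic program (my own illustration, for a vanilla HMM with fixed transition and emission tables rather than the deep HMM above).

```python
# A sketch of the dynamic program behind the expected-indicator form of (5):
# the expected count of the choice event "take edge i -> j at time t" is the
# standard edge posterior computed by forward-backward.
import numpy as np

def edge_posteriors(pi, A, B, obs):
    """pi: (K,) initial probabilities, A: (K,K) transitions, B: (K,V) emissions, obs: list of ints."""
    T, K = len(obs), len(pi)
    alpha = np.zeros((T, K))
    beta  = np.zeros((T, K))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]         # forward pass
    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])       # backward pass
    Z = alpha[T - 1].sum()                                   # P(obs)
    xi = np.zeros((T - 1, K, K))                             # xi[t, i, j] = P(H_t = i, H_{t+1} = j | obs)
    for t in range(T - 1):
        xi[t] = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :] / Z
    return xi

# The gradient of log P(obs) is then the xi-weighted sum of the gradients of the
# log transition (and, analogously, emission) probabilities, with no sampling of z.
```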

CTC: In CTC we are given a sequence x_1, \ldots, x_T (typically acoustic feature vectors sampled at 100 frames per second) and we must output a sequence of symbols (typically letters or phonetic symbols) y_1, \ldots, y_N with N \leq T and typically N \ll T.  More precisely, for network parameters \Theta the model defines P_\Theta(y_1,\ldots,y_N|x_1,\ldots,x_T).  However, in the model there is a latent sequence \hat{y}_1,\ldots,\hat{y}_T where each \hat{y}_t is either an output symbol or the special symbol \bot.  The sequence y_1,\ldots,y_N is derived from \hat{y}_1,\ldots,\hat{y}_T by removing the occurrences of \bot. The model can be written as

h_{t+1} = \mathrm{RNNcell}_\Theta(h_t, x_t)

P_\Theta(\hat{y}_{t+1}|x_1,\ldots,x_t) = P_\Theta(\hat{y}_{t+1}|h_{t+1})

We can use the EG algorithm to compute the gradient of \ln P_\Theta(y_1,\ldots,y_N|x_1,\ldots,x_T) where EG sums over all possible sequences \hat{y}_1,\ldots,\hat{y}_T that reduce to y_1,\ldots,y_N.  I leave it to the reader to work out the details of the dynamic programming table. For the dynamic programming to work it is important that h_t does not depend on the latent sequence \hat{y}_1,\ldots,\hat{y}_T.
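For readers who want to experiment, PyTorch ships an implementation of exactly this marginalization in torch.nn.CTCLoss, with the blank symbol playing the role of \bot.  Here is a hedged sketch; the encoder is an arbitrary stand-in of mine, and only the loss call itself is the library API.

```python
# A sketch of the CTC setup: the loss marginalizes over all latent sequences
# y_hat_1..T that reduce to the target, and backward() gives the exact gradient.
import torch
import torch.nn as nn

T, B, C, F, S = 100, 4, 30, 40, 15   # frames, batch, symbols incl. blank, feature dim, target length
encoder = nn.LSTM(F, 128)            # stand-in acoustic encoder producing h_1..h_T
proj    = nn.Linear(128, C)
ctc     = nn.CTCLoss(blank=0)        # symbol 0 is the blank (the post's bottom symbol)

x = torch.randn(T, B, F)                           # acoustic features x_1..x_T
h, _ = encoder(x)
log_probs = proj(h).log_softmax(dim=-1)            # (T, B, C): log P(y_hat_t | h_t)

targets        = torch.randint(1, C, (B, S))       # y_1..y_N with no blanks
input_lengths  = torch.full((B,), T, dtype=torch.long)
target_lengths = torch.full((B,), S, dtype=torch.long)

loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                      # exact gradient of -log P(y | x); no sampling of y_hat
```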

Footnotes:

  1. A week after this post was originally published, Gu et al. posted a non-autoregressive language model.

VAE = EM

I recently realized the connection between the expectation maximization algorithm (EM) and variational autoencoders (VAE).  Both optimize the same objective function, where a VAE performs gradient descent based on a sampling estimate of the gradient while EM performs exact alternating maximization in models where this is possible.  It turns out that the alternating optimization view of EM is described in section 9.4 of Bishop’s 2006 text on machine learning and also in section 19.2 of the text by Goodfellow, Bengio and Courville.   I had overlooked this connection between EM and VAEs, and it seems important, so here is a blog post that helps me, at least, organize my thinking.

First some background.

The expectation maximization (EM) algorithm is a time-honored cornerstone of machine learning.  Its general formulation was given in a classic 1977 paper by Dempster, Laird and Rubin, although many special cases had been developed prior to that.  The algorithm is used for estimating unknown parameters of a probability distribution involving latent variables.  The algorithm is usually introduced by looking at the special case of modeling a collection of points by a mixture of Gaussians.  Here we are given a set of points, and the model assumes that each point came from an unlabeled component of the mixture (the component associated with the point is a latent variable). Given the points one must estimate a component weight, a mean, and a covariance matrix for each component of the mixture.


This is a non-convex optimization problem. EM iteratively re-estimates the parameters with a guarantee that for each iteration, unless one is already at a parameter setting with zero probability gradients (a stationary point), the probability of the points under the model improves.  There are numerous important special cases such as learning the parameters of a hidden Markov model or a probabilistic context free grammar.

Variational autoencoders (VAEs) (tutorial) were introduced in 2014 by Kingma and Welling in the context of deep learning.  Autoencoding is related to compression.  JPEG compresses an image into an encoded form that uses less memory but can then be decompressed to get the image back with some loss of resolution.

[Figure: an autoencoder, x \;\rightarrow\; P_\Psi(z|x) \;\rightarrow\; z \;\rightarrow\; P_\Theta(z,x) \;\rightarrow\; x' (image credit: Kevin Franz)]

Information theoretically, compression is closely related to distribution modeling. Shannon’s source coding theorem states that for any probability distribution, such as the distribution on “natural” images, one can (block) code the elements using a number of bits per element equal to the information-theoretic entropy of the distribution.  In the above figure the average number of bits in the compressed form in the middle should be equal to the entropy of the probability distribution over images.

VAEs, however, do not directly address the compression problem.  Instead they focus on fitting the parameters \Theta of the distribution P_\Theta(z,x) so as to maximize the marginal probability P_\Theta(x) of a given set x of samples (such as a set of points or a set of images). The encoder is just a tool to make this parameter fitting more efficient. The problem being solved by a VAE is the same as the problem being solved by EM — fitting the parameters of a probability distribution to given data where the model includes latent variables not specified in the data.

VAE = EM. Both EM and VAEs are formulated in terms of the following objective function on the two distributions P_\Psi(z|y) (the encoder) and P_\Theta(y,z) (the model defining P_\Theta(z) and P_\Theta(y|z)).

\Psi^*,\Theta^* = \mathrm{argmax}_{\Psi,\Theta}\;\sum_y\;{\cal L}(\Psi,\Theta,y)

{\cal L}(\Psi,\Theta,y) = E_{z \sim P_\Psi(z|y)}\; \ln P_\Theta(z,y) + H(P_\Psi(z|y))\;\;\;(1)

= \ln\;P_{\Theta}(y) - KL(P_\Psi(z|y),P_{\Theta}(z|y))\;\;\;(2)

Here y is summed over training data. The equivalence of  (1) and (2) is important as they provide different outlooks on the objective.   The equivalence is implied by the following observation.

E_{z \sim P_\Psi(z|y)} \;\ln P_\Theta(z,y)
= E_{z \sim P_\Psi(z|y)} \;\ln P_\Theta(y)P_\Theta(z|y)
= E_{z \sim P_\Psi(z|y)}\;\ln P_\Theta(y) + \;E_{z \sim P_\Psi(z|y)}\;\ln P_\Theta(z|y)
= \ln P_\Theta(y) + \;E_{z \sim P_\Psi(z|y)}\;\ln P_\Theta(z|y)

Adding the entropy term H(P_\Psi(z|y)) to both sides converts E_{z \sim P_\Psi(z|y)}\;\ln P_\Theta(z|y) + H(P_\Psi(z|y)) into -KL(P_\Psi(z|y),P_\Theta(z|y)), which gives (2).

EM can be defined as alternating optimization of \sum_y {\cal L}(\Psi,\Theta,y) where (2) supports the optimization of \Psi (the E step) and (1) supports the optimization of \Theta (the M step).  The EM algorithm applies to models where both the E step and the M step can be solved efficiently exactly and for such models each M step is guaranteed to improve \sum _y \ln P_\Theta(y).  For these models this alternating optimization is typically far superior to any form of gradient descent on \sum_y \ln P_\Theta(y).

VAEs are used in models where the alternating optimization cannot be done efficiently in closed form.  A VAE performs gradient descent on (1) where the gradient is estimated by sampling z from the encoder P_\Psi(z|y).  This typically involves a “reparameterization trick”.  One represents P_\Psi(z|y) by taking z = g_\Psi(y,\eta) where \eta is a random variable with a fixed distribution. We can then rewrite (1) as

{\cal L}(\Psi,\Theta,y) = E_{\eta}\; \ln P_\Theta(g_\Psi(y,\eta),y) + H(P_\Psi(z|y))\;\;\;(3)

It is common to take P(\eta), P_\Psi(z|y), P_\Theta(z) and P_\Theta (y|z) to all be high dimensional Gaussians, each represented by a mean and a (diagonal) covariance matrix.  The noise variable \eta is taken to be a fixed zero mean isotropic Gaussian while P_\Psi(z|y), P_\Theta(z) and P_\Theta (y|z) have means and covariances computed by deep networks.  We can now calculate the gradient of (3) for any given sample of \eta.
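Here is a minimal sketch of this setup (my own illustration with arbitrary layer sizes, a standard Gaussian prior on z, and a unit-variance Gaussian decoder so that \ln P_\Theta(y|z) reduces to a squared-error term).

```python
# A sketch of objective (3): sample eta ~ N(0, I), set z = g_Psi(y, eta) =
# mu(y) + sigma(y) * eta, and take a gradient step on a one-sample ELBO estimate.
import torch
import torch.nn as nn

D, Z = 784, 32                                   # data dimension, latent dimension
enc = nn.Sequential(nn.Linear(D, 256), nn.ReLU(), nn.Linear(256, 2 * Z))  # mean and log-variance of P_Psi(z|y)
dec = nn.Sequential(nn.Linear(Z, 256), nn.ReLU(), nn.Linear(256, D))      # mean of P_Theta(y|z)

def elbo(y):                                     # y: (batch, D)
    mu, logvar = enc(y).chunk(2, dim=-1)
    eta = torch.randn_like(mu)                   # noise with a fixed distribution
    z = mu + (0.5 * logvar).exp() * eta          # reparameterization: z = g_Psi(y, eta)
    recon = -((dec(z) - y) ** 2).sum(-1)         # log P_Theta(y|z) up to constants (unit-variance Gaussian)
    kl = 0.5 * (mu ** 2 + logvar.exp() - logvar - 1).sum(-1)  # KL(P_Psi(z|y) || N(0, I)), in closed form
    return (recon - kl).mean()                   # the entropy and prior terms of (1) fold into the KL term

y = torch.rand(16, D)
(-elbo(y)).backward()                            # the gradient flows through the sampled z
```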

It is not clear whether the term “variational autoencoder” should imply the use of Gaussians.  Goodfellow et al. seem to take the position that VAEs are defined by gradient descent on (1) where the gradient is estimated by sampling z from P_\Psi(z|y).  This does not necessarily involve Gaussians.  A version of HMM-VAEs has recently been formulated which includes discrete latent HMM states.  However the observed variable y is still continuous and Gaussians are still involved.  Presumably one can formulate VAEs where both y and z are structured discrete variables, such as a language model with discrete latent states.  These VAEs would presumably not involve Gaussians.

Other approaches to optimizing {\cal L}(\Psi,\Theta,y) are possible. The speech recognition algorithm CTC uses an exact E step but a gradient M step and achieves gradient descent on deep networks with discrete latent variables but without latent variable sampling.  See the following blog post for a discussion of the CTC approach which we might call the EG (expected gradient) algorithm.


Deep Meaning Beyond Thought Vectors

I ended my last post by saying that I might write a follow-up post on current work that seems to exhibit progress toward natural language understanding.   I am going to discuss a couple of sample papers, but of course these are not the only promising papers.  I just want to give some examples to argue that significant progress is being made and that there is no obvious limit. I will start with the following quotation which has gained considerable currency lately.

“You can’t cram the meaning of a whole %&!$ing sentence into a single $&!*ing vector!”

Ray Mooney

This quotation seems to be becoming a legacy similar to that of Jelinek’s quotation that “every time I fire a linguist our speech recognition error rate improves”. However, while the underlying phenomenon pointed to by Jelinek — the dominance of learning over design — has been controversial over the years, I suspect that there is already a large consensus that Mooney is correct — we need more than “thought vectors”.

Meaning as a sequence of vectors: Attention

The first step beyond representing a sentence as a single vector is to represent a sentence as a sequence of vectors. The attention mechanism of machine translation systems essentially does this. It takes the meaning of the input sentence to be the sequence of hidden state vectors of an RNN (LSTM or GRU).  As the translation is generated the attention mechanism extracts information from the interior of the input sentence.  There is no clear understanding of what is being represented, but the attention mechanism is now central to high performance translation systems such as that recently introduced by Google.  Attention is old news so I will say no more.

Meaning as a sequence of vectors: Graph-LSTMs

Here I will mention two very recent papers.  The first is a paper that appeared at ACL this year by Nanyun Peng, Hoifung Poon, Chris Quirk, Kristina Toutanova, and Wen-tau Yih, entitled “Cross-Sentence N-ary Relation Extraction with Graph LSTMs”.  This paper is on relation extraction — extracting gene-mutation-drug triples from the literature on cancer mutations.  However, here I am focused on the representation used for the meaning of text.  As in translation, text is converted to a sequence of vectors with a vector for each position in the text.  However, this sequence is not computed by a normal LSTM or GRU.  Rather the links of a dependency parse are added as edges between the positions in the sentence resulting in a graph that includes both the dependency edges and the “next-word” edges as in the following figure from the paper.

[Figure from Peng et al.: a sentence with both next-word and dependency edges, separated into a left-to-right DAG and a right-to-left DAG]

The edges are divided into those that go left-to-right and those that go right-to-left as shown in the second part of the figure.  Each of these two edge sets forms a DAG (a directed acyclic graph).  Tree LSTMs immediately generalize to DAGs and I might prefer the term DAG-LSTM to the term Graph-LSTM used in the title.  They then run a DAG-LSTM from left to right over the left-to-right edges and a DAG-LSTM from right to left over the right-to-left edges and concatenate the two vectors at each position.  The dependency parse is provided by an external resource.  Importantly, different transition parameters are used for different edge types — the “next-word” edge parameters are different from the “subject-of” edge parameters which are different from the “object-of” edge parameters.

A startlingly similar but independent paper was posted on arXiv in March by Bhuwan Dhingra, Zhilin Yang, William W. Cohen and Ruslan Salakhutdinov, entitled “Linguistic Knowledge as Memory for Recurrent Neural Networks“.  This paper is on reading comprehension but here I will again focus on the representation of the meaning of text.  They also add edges between the positions in the text.  Rather than add the edges of a dependency parse they add coreference edges and edges for hyponym-hypernym relations as in the following figure from the paper.

[Figure from Dhingra et al.: positions in the text linked by coreference edges and hyponym-hypernym edges, in addition to next-word edges]

As in the previous paper, they then run a DAG-RNN (a DAG-GRU) from left-to-right and also right-to-left.  But they use every edge in both directions with different parameters for different types of edges and different parameters for the left-to-right and right-to-left directions of the same edge type.  Again the coreference edges and hypernym/hyponym edges are provided by an external resource.
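Here is a deliberately stripped-down numpy sketch of the idea shared by both papers (my own simplification: a plain tanh update rather than the papers’ LSTM/GRU cells): process positions left to right and combine incoming messages with edge-type-specific parameters.

```python
# A sketch of a typed-edge DAG-RNN; the right-to-left pass would mirror this
# with reversed edges and its own parameters, and the two outputs concatenated.
import numpy as np

D = 16
rng  = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(D, D))                    # input projection
W = {"next-word":  rng.normal(scale=0.1, size=(D, D)),       # one matrix per edge type
     "subject-of": rng.normal(scale=0.1, size=(D, D)),
     "coref":      rng.normal(scale=0.1, size=(D, D))}

def dag_rnn(x, edges):
    """x: (T, D) input word vectors; edges: list of (src, dst, type) with src < dst, so positions form a DAG."""
    T = len(x)
    h = np.zeros((T, D))
    incoming = {t: [] for t in range(T)}
    for s, d, typ in edges:
        incoming[d].append((s, typ))
    for t in range(T):                                       # position order is a topological order
        msg = sum(W[typ] @ h[s] for s, typ in incoming[t]) if incoming[t] else np.zeros(D)
        h[t] = np.tanh(W_in @ x[t] + msg)
    return h                                                 # one vector per position

x = rng.normal(size=(5, D))
edges = [(t, t + 1, "next-word") for t in range(4)] + [(0, 3, "subject-of"), (1, 4, "coref")]
h_left_to_right = dag_rnn(x, edges)
```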

Meaning as a sequence of vectors: Self-Attention

As a third pass I will consider self-attention as developed in “A Structured Self-attentive Sentence Embedding” by Lin et al. (IBM Watson and the University of Montreal) and “Attention is all you need” by Vaswani et al. (Google Brain, Google Research and U. of Toronto).  Although only a few months old these papers are already well known.  Unlike the Graph-LSTM papers above, these papers learn graph structure on the text without the use of external resources.  They also do not use any form of RNN (LSTM or GRU) but rely entirely on learned graph structure. The graph structure is created through self-attention.  I will take the liberty of some simplification — I will ignore the residual skip connections and various other details.  We start with the sequence of input vectors.  For each position we apply three matrices to get three different vectors — a key vector, a query vector, and a value vector.  For each position we take the inner product of the query vector at that position with the key vector at every position to get an attention weighting (or set of weighted edges) from that position to the other positions in the sequence (including itself).  We then weight the value vectors by that weighting and pass the result through a feed-forward network to get a vector at that position at the next layer.  This is repeated for some number of layers.  The sentence representation is the sequence of vectors in the last layer.
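To make the mechanism concrete, here is a bare-bones numpy sketch of a single self-attention layer as just described (my own simplification: one head, no residual connections, no layer norm, no position information).

```python
# A sketch of one self-attention layer: queries against keys define an attention
# graph over positions, values are averaged along its edges, then a per-position
# feed-forward network is applied.
import numpy as np

def self_attention_layer(X, Wq, Wk, Wv, ff):
    """X: (T, D) vectors, one per position; Wq, Wk, Wv: (D, D) projections; ff: per-position feed-forward."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # (T, T): each query against every key
    A = np.exp(scores - scores.max(axis=-1, keepdims=True))
    A = A / A.sum(axis=-1, keepdims=True)             # softmax rows: the weighted attention edges
    return ff(A @ V)                                   # attention-weighted values, then the feed-forward net

rng = np.random.default_rng(0)
T, D = 6, 32
X = rng.normal(size=(T, D))
Wq, Wk, Wv = (rng.normal(scale=0.1, size=(D, D)) for _ in range(3))
H = self_attention_layer(X, Wq, Wk, Wv, ff=np.tanh)    # stack several such layers for the full model
```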

The above description ignores the multi-headed feature described in both papers.  Rather than just compute one attention graph they compute several different attention graphs, each of which is defined by different parameters.  Each of these attention graphs might be interpreted as a different type of edge, such as the agent semantic role relation between event verbs and their agents. But the interpretation of deep models is always difficult.  The following figure from Vaswani et al. shows the edges from the word “making” where different colors represent different edge types.  Although the sentence is shown twice, the edges are actually between the words of a single sequence at a single layer of the model.

[Figure from Vaswani et al.: attention edges from the word “making”, with colors indicating different edge types]

The following figure shows one particular edge type which is possibly interpretable as a coreference relation.

[Figure from Vaswani et al.: the edges of a single attention head, possibly interpretable as coreference]

We might call these networks self-attention networks (SANs). Note that they are sans LSTMs and sans CNNs :).

Meaning as an embedded knowledge graph

I want to complain at this point that you can’t cram the meaning of a bleeping sentence into a bleeping sequence of vectors.  The graph structures on the positions in the sentence used in the above models should be exposed to the user of the semantic representation.  I would take the position that the meaning should be an embedded knowledge graph — a graph on embedded entity nodes and typed relations (edges) between them.  A node representing an event can be connected through edges to entities that fill the semantic roles of the event type (a neo-Davidsonian representation of events in a graph structure).

One paper in this direction is “Dynamic Entity Representations in Neural Language Models” by Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi and Noah A. Smith (EMNLP 2017). This model jointly learns to identify mentions, coreference, and entity embeddings (a vector representing the object referred to).

Another related paper is “Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision” by Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus and Ni Lao which appeared at ACL this year.  This paper addresses the problem of natural language question answering using freebase.  But again I will focus on the representation of the input sentence (question in this case). They convert the input question to a kind of graph structure where each node in the graph denotes a set of freebase entities.  An initial set of nodes is constructed by an external linker that links mentions in the question, such as “city in the United States”, to a set of freebase entities. A sequence-to-sequence model is then used to introduce additional nodes each one of which is defined by a relation to previously introduced nodes. This “graph” can be viewed as a program with instructions of the form n_i = op(R, n_j) where n_i is a newly defined node, n_j is a previously defined node, R is a database relation such as “born-in” and op is one of “hop”, “argmax”, “argmin”, or “filter”.  For example

n_1 = US-city;  n_2 = hop(born-in, n_1);  n_3 = argmax(population-of, n_1)

defines n_1 as the set of US cities, n_2 as the set of people born in a US city and n_3 as the largest US city.  Answering the input question involves running the generated program on freebase.  The sequence to sequence model is trained from question-answer pairs without any gold semantic graph output sequences. They use some fancy tricks to get REINFORCE to work. But the point here is that semantic parsing of this type converts input text to a kind of graph structure with embedded nodes (embedded by the sequence-to-sequence model). Another important point is that constructing the semantics involves background knowledge — freebase in this case.  Language is highly referential and “meaning” must ultimately involve reference resolution — linking to a knowledge base.
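To make the program semantics concrete, here is a toy interpreter of my own (with a miniature made-up knowledge base standing in for freebase) for programs of this form.

```python
# A toy execution of n_1 = US-city; n_2 = hop(born-in, n_1); n_3 = argmax(population-of, n_1).
kb = {  # relation -> list of (subject, object) pairs; entirely invented data
    "born-in":       [("alice", "chicago"), ("bob", "nyc"), ("carol", "paris")],
    "population-of": [("chicago", 2.7e6), ("nyc", 8.4e6), ("paris", 2.1e6)],
}
us_city = {"chicago", "nyc"}

def hop(rel, nodes):                     # entities related by rel to something in nodes
    return {s for s, o in kb[rel] if o in nodes}

def argmax(rel, nodes):                  # the member of nodes with the largest rel value
    vals = {s: o for s, o in kb[rel] if s in nodes}
    return {max(vals, key=vals.get)}

n1 = us_city                             # n_1 = US-city
n2 = hop("born-in", n1)                  # n_2: people born in a US city -> {"alice", "bob"}
n3 = argmax("population-of", n1)         # n_3: the largest US city -> {"nyc"}
```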

What’s next?

A nice survey of semantic formalisms is given in “The State of the Art in Semantic Representation” by Omri Abend and Ari Rappoport which appeared at this ACL.  A fundamental issue here is the increasing dominance of learning over design.  I believe that in the near term it will not be possible to separate semantic formalisms from learning architectures.  (In the long term there is the foundations of mathematics …) If the target semantic formalism is a neo-Davidsonian embedded knowledge graph, then this formalism must be unified with some learning architecture. The learning should be done in the presence of background knowledge — both semantic and episodic long term memory.  Background knowledge could itself be an embedded knowledge graph.  The paper “node2vec: Scalable Feature Learning for Networks” by Grover and Leskovec gives a method of embedding a given (unembedded) knowledge graph. But it ultimately seems better to generate the embedding jointly with acquiring the graph.

I have long believed that the most promising cognitive (learning) architecture is bottom-up logic programming.  Bottom-up (forward chaining) logic programming has a long history in the form of production systems, datalog, and Jason Eisner’s Dyna programming language.  There are strong theoretical arguments for the centrality of bottom-up logic programming. I have heard rumors that there is currently a significant effort at DeepMind based on deep inductive logic programming (Deep ILP) for bottom-up programs and that a paper will appear in the next few months.  I have high hopes …
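As a tiny illustration of what “bottom-up” means here (my own toy example, not taken from any of the systems mentioned), a forward-chaining evaluator simply keeps firing rules on the facts derived so far until nothing new can be derived.

```python
# A sketch of naive bottom-up (forward-chaining) evaluation in the Datalog style,
# here for transitive closure: path(X,Y) :- edge(X,Y).  path(X,Z) :- path(X,Y), path(Y,Z).
edges = {("a", "b"), ("b", "c"), ("c", "d")}

def transitive_closure(edges):
    path = set(edges)
    while True:
        new = {(x, z) for (x, y1) in path for (y2, z) in path if y1 == y2} - path
        if not new:                      # fixpoint reached: nothing left to derive
            return path
        path |= new

print(sorted(transitive_closure(edges)))  # includes ("a", "c"), ("a", "d"), ("b", "d")
```

Each derived fact behaves like a node in a feed-forward computation, which is one way to see the connection to dynamic programming mentioned above.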


The Plausibility of Near-Term Machine Sentience.

When should we expect “operational sentience” — the point where the most effective way to interact with a machine is to assume it is sentient, to assume that it understands what we tell it?  I want to make an argument that near-term machine sentience in this sense is plausible, where near-term means, say, ten years.

Deep learning has already dramatically improved natural language processing. Translation, question answering, parsing, and linking have all improved. A fundamental question is whether the recent advances in translation and linking provide evidence that we are getting close to “understanding”. I do not want to argue that we are getting close, but rather just that we don’t know and that near-term sentience is “plausible”.

My case is based on the idea that paraphrase, entailment, linking, and structuring may be all that is needed. Paraphrase is closely related to translation — saying the same thing a different way or in a different language.  There has been great progress here. Entailment is determining if one statement implies another.  Classical logic was developed as a model of entailment. But natural language entailment is far too slippery to be modeled by any direct application of formal logic. I interpret the progress in deep learning applied to NLP question answering as evidence for progress in entailment. Entailment is also closely related to paraphrase (no paraphrase is precisely meaning preserving) and the progress in translation seems related to potential progress in entailment. “Linking” means tagging natural language phrases with the database entries that the phrase is referring to.  The database can be taken to be freebase or wikidata or any knowledge graph. This is related to the philosophical problem of “reference” – what is our language referring to. But simply tagging phrases with database entries, such as “Facebook” or “the US constitution”, does not seem philosophically mysterious. I am using the term “structuring” to mean knowledge base population (KBP) – populating a knowledge base from unstructured text. Extracting entities and relationships (building a database) from unstructured text does not seem philosophically mysterious. It seems plausible to me that paraphrase, entailment, linking, and KBP will all see very significant near-term advances based on deep learning. The notions of “database” and “inference rule” (as in datalog) presumably have to be blended with the distributed representations of deep learning and integrated into “linking” and “structuring”. This seems related to memory networks with large memories. But I will save that for another post.

The plausibility of near-term machine sentience is supported by the plausibility that language understanding is essentially unrelated to, and independent of, perception and action other than inputting and outputting language itself. There is a lot of language data out there. I have written previous blog posts against the idea of “grounding” the semantics of natural language in non-linguistic perception and action or in the physical position of material in space.

Average level human natural language understanding may prove to be easier than, say, average level human vision or physical coordination. There has been evolutionary pressure on vision and coordination much longer than there has been evolutionary pressure on NLP understanding. For me the main question is how close are we to effective paraphrase, entailment, linking and structuring.  NLP researchers are perhaps the best AI practitioners to comment on this blog post. I believe that Gene Charniak, a pioneer of machine parsing, believes that machine NLP understanding is at least a hundred years off. But I am more interested in laying out concrete arguments, one way or the other, than in taking opinion polls. Deep learning may have the ability to automatically handle the hundreds of thousands of linguistic phenomena that seem to exist in, say, English. People learn language. Is there some reasoned argument that this cannot work within a decade?

Maybe at some point I will write a longer post outlining what current work seems to me to be on the path to machine sentience.


Formalism, Platonism and Mentalese

This is a sequel to an earlier post on Tarski and Mentalese.  I am writing this sequel for two reasons. First, I just posted a new version of my treatment of type theory which focuses on “naive semantics”. I want to explain here what that means. Second, after many attempts to get formalists to accept Platonic proofs I have decided that it is best to approach the issue from the formalist point of view. In this new approach I present Platonism as just a translation from a formal language (such as a programming language or mathematical logic) to some symbolic internal mentalese. Human mathematical thought can presumably be modeled, to some extent at least, as symbolic computation involving expressions of mentalese.  Formalists will accept the idea of a symbolic language of thought but some deep learning people resist the idea that symbolic computation is relevant to any form of AI.  I will avoid that latter discussion here and assume that a mentalese of mathematics exists.

To understand the issues of formalism vs. Platonism and the relationship of both to semantics it is useful to consider an example. In type theory we have dependent pair types. These are written fairly obscurely as \Sigma_{x:\sigma} \;\tau[x].  This expression denotes a type (think class). An instance of this type (or class) is a pair (x,y) where  x is an instance of the type \sigma and y is an instance of the type \tau[x] where \tau[x] is a type (class) whose definition involves the value x.  To make this more concrete consider the following example.

DiGraph = \Sigma_{s:set}\;(s \times s \rightarrow \mathbf{Bool})

This is the type (or class) of directed graphs where we are taking a directed graph to be a pair (N,P) where N is a set of “nodes” and P is an “edge predicate” on the nodes which is true of an ordered pair of nodes if there is a directed edge from the first to the second.

The above paragraph defines the notation \Sigma_{x:\sigma}\;\tau[x] in a way that would be sufficient when introducing notation in a mathematics class.   Mathematical notation is generally defined before it is used.  Above we define the notation \Sigma_{x:\sigma}\;\tau[x] in much the same way that any mathematical notation is defined in the practice of mathematics. A simpler example would be to introduce the symbol \cup by saying that for sets s and u write s \cup u to denote the union of the sets s and u.  This style of definition provides a translation of formal notation into the natural language (such as English) which is the language of mathematical discourse in practice.
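As an aside of my own (not part of the post's formal development), the dependent pair type and the DiGraph example can be written down directly in a proof assistant.  A minimal sketch in Lean 4, informally reading “set” as a Lean Type:

```lean
-- A sketch: the dependent pair type Σ s, (s × s → Bool) and one instance of it.
def DiGraph : Type 1 := (s : Type) × (s × s → Bool)

-- The two-node graph on Bool with a single directed edge from false to true.
def twoNode : DiGraph := ⟨Bool, fun p => (!p.1) && p.2⟩
```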

But logicians are not satisfied with this style of definition.  They prefer a more formal treatment known as Tarskian compositional semantics.  My previous post introduces this by treating the semantics of arithmetic expressions.  We introduce a value function {\cal V}[e]\rho where e is an arithmetic expression and \rho is a variable interpretation assigning a value to the variables occurring in e.  Now consider how notation is defined in mathematics.  We might say “we write n! for the product of the natural numbers from 1 to n”.  We can write a compositional semantics specification of the meaning of n! as follows.

V[e!]\rho \equiv \prod_{i=1}^{V[e]\rho}\;i

Here V[e]\rho is the notation for the value of e under variable interpretation \rho.  This looks more explicitly like a translation between formal notations. However, this notation assumes that the product notation is already understood — can be reasoned about in internal mentalese.  This notation can still be interpreted as a translation from an external notation to mentalese.
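To make the compositional style concrete, here is a toy evaluator of my own in the spirit of V[e]\rho: the value of an expression is defined by recursion on its form, relative to a variable interpretation \rho.

```python
# A sketch of a compositional value function for a tiny expression language
# with variables, numerals, +, *, and factorial.
from math import prod

def V(e, rho):
    if isinstance(e, str):                      # a variable
        return rho[e]
    if isinstance(e, int):                      # a numeral
        return e
    op, *args = e
    if op == "+": return V(args[0], rho) + V(args[1], rho)
    if op == "*": return V(args[0], rho) * V(args[1], rho)
    if op == "!": return prod(range(1, V(args[0], rho) + 1))   # V[e!]rho = product of 1..V[e]rho
    raise ValueError(op)

print(V(("!", ("+", "n", 1)), {"n": 4}))        # (n+1)! with rho(n) = 4, prints 120
```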

In type theory, as in most programming languages, variables are typed.  A math class might start with the sentence “Let B be a Banach space.”.  In type theory this is a declaration of the variable B as being of type “Banach space”.  A set of variable declarations and assumptions constraining the variable values is called a “context”. Contexts are often denoted by \Gamma.  In my work on type theory I subscript the value function with a context declaring the types of the free variables that can occur in the expression. For example we might have

V_{G:\mathbf{DiGraph}}\;[e[G]]\rho

Here \rho must assign a value to the variable G and that value must be a directed graph.  The type of directed graphs can be expressed as a dependent pair type as described above.

The phrase “naive semantics”, or more specifically “naive type theory”, involves assigning the expressions  their naive meaning.  For example we have

V_\Gamma[\Sigma_{x:\sigma}\;\tau[x]]\rho =\{(a,b)\;:\;a \in V_\Gamma[\sigma]\rho,\;b \in V_{\Gamma;\;x:\sigma}[\tau[x]]\rho[x \leftarrow a]\}

This is just the first definition of the dependent pair type notation given above expressed within a Tarskian compositional semantics.  The above definition can be viewed as simply providing a translation of a formal expression into the notation of practical mathematical discourse and hence as a translation of formal expressions into the language of mentalese.

Of course when push comes to shove mathematics grounds out in sets. The notion of “set” is taken to be primitive and is not defined in terms of more basic notions.  We have learned, however, to be precise about the properties of sets using the axioms of ZFC.  In the modern understanding of sets (since about 1925) we distinguish sets from classes where classes are collections too large to be sets such as the collection of all groups or the collection of all topological spaces.  Naive type theory starts from this (sophisticated) notion of sets and classes and then gives all the other expressions of type theory their naive meanings.  This is no different from the introduction of notation in any particular area of mathematics except that type theory is about all the areas of mathematics — it is part of the foundations of mathematics.

Naive type theory is almost trivial.  However, considerable effort is required to handle the notion of isomorphism within naive type theory.  Handling isomorphism within naive type theory is the main point of my work in this area.


Comprehension Based Language Modeling

One of the holy grails of the modern deep learning community is to develop effective methods for unsupervised learning.  I have always held out hope that the semantics of English could be learned from raw unlabeled text.  The plausibility of a statement should affect the probability of that statement. It would seem that a perfect language model — one approaching the true perplexity or entropy of English — must incorporate semantics.  For this reason I read with great interest the recent dramatic improvement in language modeling achieved by Jozefowicz et al. at Google Brain.  They report a perplexity of about 30 on Google’s One Billion Word Benchmark.  Perplexities below 60 have been very difficult to achieve.

At TTIC we have been working on machine comprehension as another direction for the acquisition of semantics. We created a “Who Did What” benchmark for reading comprehension based on the LDC Gigaword Corpus.  I will discuss reading comprehension a bit more below, but I first want to focus on the nature of news corpora.  We found that the Gigaword corpus contained multiple articles on the same events.  We exploited this fact in the construction of our Who-did-What dataset.  I was curious how the Billion Word Benchmark handled this multiplicity.  Presumably the training data contained information about the same entities and events that occur in the test sentences. What is the entropy of a statement under the listener’s probability distribution when the listener knows (the semantic content of) what the speaker is going to say?  To make this issue clear, here are some examples of training and test sentences from Google’s One Billion Word Corpus:

Train: Saudi Arabia on Wednesday decided to drop the widely used West Texas Intermediate oil contract as the benchmark for pricing its oil.

Test: Saudi Arabia , the world ‘s largest oil exporter , in 2009 dropped the widely used WTI oil contract as the benchmark for pricing its oil to customers in the US.

Train: Bobby Salcedo was re-elected in November to a second term on the board of the El Monte City School District.

Test: Bobby Salcedo was first elected to the school board in 2004 and was re-elected in November.

These examples were found by running the Lucene information retrieval system on the training data using the test sentence as the query (thanks to Hai Wang).  These examples suggest that language modeling is related to reading comprehension.  Reading comprehension is the task of answering a multiple choice question about a passage involving entities and events not present in general structured knowledge sources. Recently several large scale reading comprehension benchmarks have been created with cloze-style questions — questions formed by deleting a word or phrase from a corpus sentence or article summary.  Cloze-style reading comprehension benchmarks include the CNN/Daily Mail benchmark, the Children’s Book Test, and our Who-did-What benchmark mentioned above.  There is also a recently constructed LAMBADA dataset which is intended as a language modeling benchmark but which can be approached most effectively as a reading comprehension task (Chu et al.).  I predict a convergence of reading comprehension and language modeling — comprehension based language modeling.

For comprehension based language modeling to make sense one should have training data describing the same entities and events as those occurring in the test data.  It might be good to measure the entropy (or perplexity) of just the content words, or even just named entities.  But one should of course avoid including the test sentences in the training data. It turns out that Google’s Billion Word Benchmark has a problem in this regard.  We found the following examples:

Train: Al Qaeda deputy leader Ayman al-Zawahri called on Muslims in a new audiotape released Monday to strike Jewish and American targets in revenge for Israel ‘s offensive in the Gaza Strip earlier this month.

Test: Al-Qaida deputy leader Ayman al-Zawahri called on Muslims in a new audiotape released Monday to strike Jewish and American targets in revenge for Israel ‘s recent offensive in the Gaza Strip.

Train: RBS shares have only recently risen past the average level paid by the Government , but Lloyds shares are still low despite a recent banking sector rally.

Test: RBS shares have only recently risen past the average level paid by the government , but Lloyds shares are still languishing despite the recent banking sector rally .

Train: WASHINGTON – AIDS patients should have a genetic test before treatment with GlaxoSmithKline Plc ‘s drug Ziagen to see whether they face a higher risk of a potentially fatal reaction , U.S. regulators said on Thursday .

Test: WASHINGTON ( Reuters ) – AIDS patients should be given a genetic test before treatment with GlaxoSmithKline Plc ‘s drug , Ziagen , to see if they face a higher risk of a potentially fatal reaction , U.S. regulators said on Thursday .

These examples occur when slightly edited versions of an article appear on different newswires or in article revisions. Although we have not done a careful study, it appears that something like half of the test sentences in Google’s Billion Word Benchmark are essentially duplicates of training sentences.

In spite of the problems with the Billion Word Benchmark, an appropriate benchmark for comprehension based language modeling should be easy to construct.  For example, we could require that the test sentences be separated by, say, a week from any training sentence.  Or we could simply de-duplicate the articles using a soft-match criterion for duplication.  The training data should consist of complete articles rather than shuffled sentences.  Also note that new test data is continuously available — it is hard to overfit next week’s news.
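As a rough sketch of what a soft-match criterion might look like (my own suggestion, not a method used by any existing benchmark), one could compare word n-gram “shingles”.

```python
# A sketch of near-duplicate detection by Jaccard similarity over word trigrams.
def shingles(text, n=3):
    toks = text.lower().split()
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def near_duplicate(a, b, n=3, threshold=0.5):
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return False
    return len(sa & sb) / len(sa | sb) >= threshold

train = "RBS shares have only recently risen past the average level paid by the Government"
test  = "RBS shares have only recently risen past the average level paid by the government"
print(near_duplicate(train, test))   # True: this test sentence would be filtered out
```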

These are exciting times.  Can we obtain a command of English in a machine simply by training on very large corpora of text?  I find it fun to be optimistic.


Cognitive Architectures

Within the deep learning community there is considerable interest in neural architecture. There are convolutional networks, recurrent networks, LSTMs, GRUs, attention mechanisms, highway networks, inception networks, residual networks, fractal networks and many more.  Most of these architectures can be viewed as certain feed-forward circuit topologies.  Circuits are a universal model of computation.  However, human programmers find it more productive to specify algorithms in high level programming languages.  Presumably this applies to learning as well — learning should be easier with a higher level architecture.

Of course the deep learning community is aware of the relationship between neural architecture and models of computation. General “high level” architectures have been proposed. We have neural Turing machines (actually random access machines), parsing architectures with stacks, and neural architectures for functional programming.  There was a nice NIPS workshop on reasoning, attention and memory (RAM) addressing such fundamental architectural issues.

It seems reasonable to use classical models of computation as inspiration for neural architectures.  But it is important to be aware of the large variety of classical architectural ideas.  Various twentieth century discrete architectures may provide a rich source of inspiration for twenty first century differentiable architectures. Here is a list of my favorite classical architectural ideas.

Mathematical Logic:   Starting with the ancient Greeks, logic has been developed directly as a model of knowledge representation and thought. Mathematical logic organizes knowledge around entities and relations. Databases are closely related to predicate calculus. Logic is capable of representing knowledge in any domain of discourse. While entities and relations are central to logic, logic involves a variety of additional features such as function application, quantification, and types. Logic also provides the intellectual framework underlying mathematics. Achieving the singularity will presumably require machines to be capable of programming computers. Computer programming seems to require sound analytical (mathematical) reasoning.

Production Systems and Logic Programming:  This style of architecture was championed by Herb Simon and Allen Newell.  It is a way of making logical rules compute efficiently. I will interpret production systems fairly broadly to include various rule-based languages such as SOAR, Ops5, Prolog and Datalog. The cleanest of this family of architectures is bottom-up logic programming which has a nice relationship to general dynamic programming and is the foundation of the Dyna programming language. Dynamic programming algorithms can be viewed as feed-forward networks where each entry in a dynamic programming table can be viewed as a structured-output unit which computes its values from earlier units and provides its value to later units.

Inductive Logic Programming:  This is a classical unification of machine learning and logic programming championed by Stephen Muggleton. The basic idea is to take a set of assertions in predicate calculus (observed data) and generalize them to a “theory” (a logic program) that is consistent and that implies the data.

Frames, Scripts, and Object-Oriented Programming:  Frames and scripts were championed as a general framework for knowledge representation by Marvin Minsky, Roger Schank, and Charles Fillmore. Frames are related to object-oriented programming in the sense that an instance of the “room frame” (or room class) has fillers for fields such as “ceiling”, “windows” and “furniture”.  Frames also seem related to the ontology of mathematics.  For example, a mathematical field consists of a set together with two operations (addition and multiplication) satisfying certain properties.  The term “structure” has a well defined meaning in model theory (a branch of logic) which is closely related to the notion of a class instance in object-oriented programming.  A specific mathematical field is a structure (in the technical sense) and is an instance of the general mathematical class of fields.

The Situation Calculus and Modal Logic:  In the situation calculus statements take meaning in “situations”.  A “fluent” is a mapping from situations to truth values.  Actions change one situation into another.  This leads to the STRIPS model of actions and planning. Situations are closely related to the possible worlds of modal logic.

Monads: Monads generalize the relationship between pure (stateless) functional programming, as in the programming language Haskell, and the more familiar effect-based programming as in C or C++ where assignment statements change the state of the computation.  The mapping (or compilation) from an effect-driven program to a pure (stateless) program defines the state monad.  There are different monads.  The state monad treats each action as a mapping from an input state to an output state. The power set monad (or non-determinism monad) treats each action as a mapping from a set of states to a set of states.  The probability monad treats each action as a mapping from a probability distribution to a probability distribution.  The probability monad gives rise to probabilistic programming languages. There are also more esoteric monads such as the CPS monad which treats each action as mapping a state of the stack to a state of the stack thereby converting recursion to iteration. In a pure language such as Haskell the use of a monad to suppress a state argument typically makes code more readable. There also seems to be a relationship between the states of the common monads and the situations or possible worlds of the situation calculus and modal logic.
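To make the monad view concrete, here is a toy probability monad of my own: a distribution is a finite map from outcomes to probabilities, “unit” builds a point mass, and “bind” pushes a distribution through a stochastic step, exactly the reading of an action as a mapping from distributions to distributions described above.

```python
# A sketch of the probability monad over finite distributions.
def unit(x):
    return {x: 1.0}

def bind(dist, f):                     # f maps an outcome to a distribution over outcomes
    out = {}
    for x, p in dist.items():
        for y, q in f(x).items():
            out[y] = out.get(y, 0.0) + p * q
    return out

coin = {"heads": 0.5, "tails": 0.5}
step = lambda s: coin if s == "heads" else unit("tails")   # flip again only after heads
print(bind(coin, step))                # {'heads': 0.25, 'tails': 0.75}
```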

Conclusion

I believe that human learning is based on a differentiable universal learning architecture and that domain specific priors are not required. But it is unclear how elaborate the general architecture is.  It seems worth considering the above list of classical architectural ideas and the possibility that these discrete architectures can be made differentiable.
