Throughout my career I have believed that progress in AI arises from objective evaluation metrics. When I first learned of GANs I was immediately skeptical because of the apparent lack of meaningful metrics. While GANs have made enormous progress in generating realistic images, the problem of metrics remains (see Borji 2018). I suspect that the lack of meaningful metrics is related to the failure of GANs to play an important role in unsupervised pre-training (feature learning) for discriminative applications. This is in stark contrast to language models (ELMo, BERT and GPT-2) where simple cross-entropy loss has proved extremely effective for pre-training.
Cross-entropy loss is typically disregarded for GANs in spite of the fact that it is the de facto metric for modeling distributions and in spite of its success in pre-training for NLP tasks. In this post I argue that rate-distortion metrics — a close relative of cross-entropy loss — should be a major component of GAN evaluation (in addition to discrimination loss). Furthermore, evaluating GANs by rate-distortion metrics leads to a conceptual unification of GANs, VAEs and signal compression. This unification is already emerging from image compression applications of GANs such as the work of Agustsson et al. 2018. The following figure by Julien Despois can be interpreted in terms of VAEs, signal compression, or GANs.
The VAE interpretation is defined by

$$\Phi^*,\Psi^*,\Theta^* \;=\; \operatorname*{argmin}_{\Phi,\Psi,\Theta}\; E_{y\sim\mathrm{Pop},\;z\sim P_\Psi(z|y)}\Big[\ln P_\Psi(z|y) \;-\; \ln P_\Theta(z) \;-\; \ln P_\Phi(y|z)\Big] \qquad (1)$$

where $P_\Psi(z|y)$ is the encoder, $P_\Theta(z)$ is the prior on the latent variable $z$, and $P_\Phi(y|z)$ is the decoder. Now define $\mathcal{L}(\Phi,\Theta)$ as the minimum of (1) over $\Psi$ while holding $\Phi$ and $\Theta$ fixed. Using this to express the objective as a function of $\Phi$ and $\Theta$, and assuming universal expressiveness of $P_\Psi(z|y)$, the standard ELBO analysis shows that (1) reduces to minimizing the cross-entropy loss of the model $P_{\Phi,\Theta}(y) = \int P_\Theta(z)\,P_\Phi(y|z)\,dz$:

$$\Phi^*,\Theta^* \;=\; \operatorname*{argmin}_{\Phi,\Theta}\; E_{y\sim\mathrm{Pop}}\big[-\ln P_{\Phi,\Theta}(y)\big] \qquad (2)$$
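To make (1) concrete, here is a minimal PyTorch sketch of a one-sample Monte Carlo estimate of the objective, assuming a diagonal Gaussian encoder, a standard normal prior $P_\Theta(z)$, and a unit-variance Gaussian decoder; `encoder` and `decoder` are placeholder modules, not something defined in this post.

```python
import torch

def vae_objective(encoder, decoder, y):
    """One-sample Monte Carlo estimate of objective (1):
    E[ ln P_Psi(z|y) - ln P_Theta(z) - ln P_Phi(y|z) ]."""
    log2pi = torch.log(torch.tensor(2.0 * torch.pi))

    mu, logvar = encoder(y)                      # parameters of the Gaussian encoder P_Psi(z|y)
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)         # z ~ P_Psi(z|y), reparameterized

    # ln P_Psi(z|y): diagonal Gaussian with mean mu and standard deviation std
    log_q = (-0.5 * ((z - mu) / std) ** 2 - torch.log(std) - 0.5 * log2pi).sum(dim=1)
    # ln P_Theta(z): standard normal prior
    log_prior = (-0.5 * z ** 2 - 0.5 * log2pi).sum(dim=1)
    # ln P_Phi(y|z): unit-variance Gaussian decoder centered at the reconstruction
    y_hat = decoder(z)
    log_dec = (-0.5 * (y - y_hat) ** 2 - 0.5 * log2pi).flatten(1).sum(dim=1)

    return (log_q - log_prior - log_dec).mean()  # minimize over encoder/decoder parameters
```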
It should be noted, however, that differential entropies and cross-entropies suffer from the following conceptual difficulties.
- The numerical value of entropy and cross entropy depends on an arbitrary choice of units. For a distribution on lengths, probability per inch is numerically very different from probability per mile.
- Shannon’s source coding theorem fails for continuous densities — it takes an infinite number of bits to specify a single real number.
- The data processing inequality fails for differential entropy — $y$ has a different differential entropy than $2y$ (see the Gaussian example after this list).
- Differential entropies can be negative.
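A single Gaussian example illustrates the first, third and fourth points. For $Y \sim \mathcal{N}(0,\sigma^2)$ measured in some unit of length, a change of units rescales $Y$ to $aY$, and the standard calculation gives

$$h(Y) \;=\; \tfrac{1}{2}\ln\!\big(2\pi e\,\sigma^2\big), \qquad h(aY) \;=\; h(Y) + \ln|a|.$$

So the same quantity has different entropies measured in inches and in miles, the deterministic map $Y \mapsto 2Y$ changes the differential entropy, and for $\sigma^2 < 1/(2\pi e)$ the entropy is negative.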
For continuous data we can replace the differential cross-entropy objective with a more conceptually meaningful rate-distortion objective. Independent of conceptual objections to differential entropy, a rate-distortion objective allows for greater control of the model through a rate-distortion tradeoff parameter $\beta$ as is done in $\beta$-VAEs (Higgins et al. 2017, Alemi et al. 2017). A special case of a $\beta$-VAE is defined by

$$\Phi^*,\Psi^*,\Theta^* \;=\; \operatorname*{argmin}_{\Phi,\Psi,\Theta}\; E_{y\sim\mathrm{Pop},\;z\sim P_\Psi(z|y)}\big[\mathrm{Dist}(y,\,y_\Phi(z))\big] \;+\; \beta\, E_{y\sim\mathrm{Pop}}\big[\mathrm{KL}(P_\Psi(z|y),\,P_\Theta(z))\big] \qquad (3)$$

where $y_\Phi(z)$ is a deterministic decoder and $\mathrm{Dist}$ is a distortion measure such as L1 or L2 distance.
The VAE optimization (1) can be transformed into the rate-distortion equation (3) by taking $P_\Phi(y|z) \propto e^{-\lambda\,\mathrm{Dist}(y,\,y_\Phi(z))}$ and taking $\lambda$ to be a fixed constant. In this case (1) transforms into (3) with $\beta = 1/\lambda$. Distortion measures such as L1 and L2 preserve the units of the signal and are more conceptually meaningful than differential cross-entropy. But see the comments below on other obvious issues with L1 and L2 distortion measures. KL-divergence is defined in terms of a ratio of probability densities and, unlike differential entropy, is conceptually well-formed.
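To spell the transformation out (a standard one-line calculation, under the mild assumption that the normalizer $Z(\lambda)$ of the decoder density does not depend on $z$): writing $P_\Phi(y|z) = \frac{1}{Z(\lambda)}\,e^{-\lambda\,\mathrm{Dist}(y,\,y_\Phi(z))}$ gives

$$-\ln P_\Phi(y|z) \;=\; \lambda\,\mathrm{Dist}(y,\,y_\Phi(z)) + \ln Z(\lambda),$$

so the objective in (1) equals $E\big[\mathrm{KL}(P_\Psi(z|y),\,P_\Theta(z))\big] + \lambda\,E\big[\mathrm{Dist}(y,\,y_\Phi(z))\big]$ plus a constant. Dividing by the fixed constant $\lambda$ and dropping $\ln Z(\lambda)$ leaves exactly (3) with $\beta = 1/\lambda$.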
Equation (3) leads to the signal compression interpretation of the figure above. It turns out that the KL term in (3) can be interpreted as a compression rate. Let $p_\Psi(z)$ be the optimum $P_\Theta(z)$ in (3) for a fixed value of $\Psi$ and $\Phi$. Assuming universality of $P_\Theta$, the resulting optimization of $\Psi$ and $\Phi$ becomes the following, where $p_\Psi(z) = E_{y\sim\mathrm{Pop}}\big[P_\Psi(z|y)\big]$ is the marginal distribution on $z$ induced by the encoder:

$$\Phi^*,\Psi^* \;=\; \operatorname*{argmin}_{\Phi,\Psi}\; E_{y\sim\mathrm{Pop},\;z\sim P_\Psi(z|y)}\big[\mathrm{Dist}(y,\,y_\Phi(z))\big] \;+\; \beta\, E_{y\sim\mathrm{Pop}}\big[\mathrm{KL}(P_\Psi(z|y),\,p_\Psi(z))\big] \qquad (4)$$

The KL term can now be written as a mutual information between $y$ and $z$:

$$E_{y\sim\mathrm{Pop}}\big[\mathrm{KL}(P_\Psi(z|y),\,p_\Psi(z))\big] \;=\; I(y,z).$$

Hence (4) can be rewritten as

$$\Phi^*,\Psi^* \;=\; \operatorname*{argmin}_{\Phi,\Psi}\; E_{y\sim\mathrm{Pop},\;z\sim P_\Psi(z|y)}\big[\mathrm{Dist}(y,\,y_\Phi(z))\big] \;+\; \beta\, I(y,z) \qquad (5)$$
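The identity behind this step is just the definition of mutual information applied to the joint distribution $p(y,z) = \mathrm{Pop}(y)\,P_\Psi(z|y)$ defined by drawing $y$ from the population and $z$ from the encoder:

$$E_{y\sim\mathrm{Pop}}\big[\mathrm{KL}(P_\Psi(z|y),\,p_\Psi(z))\big]
\;=\; E_{(y,z)\sim p}\!\left[\ln\frac{P_\Psi(z|y)}{p_\Psi(z)}\right]
\;=\; E_{(y,z)\sim p}\!\left[\ln\frac{p(y,z)}{\mathrm{Pop}(y)\,p_\Psi(z)}\right]
\;=\; I(y,z).$$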
A more explicit derivation can be found in slides 17 through 21 in my lecture slides on rate-distortion auto-encoders.
By Shannon’s channel capacity theorem, the mutual information $I(y,z)$ is the number of bits transmitted through a noisy channel from $y$ to $z$ — it is the number of bits from $y$ that reach the decoder $y_\Phi(z)$. In the figure, $z$ is defined by the equation

$$z \;=\; z_\Psi(y) + \epsilon$$

for some fixed noise distribution on $\epsilon$, where $z_\Psi(y)$ is a deterministic encoder output. Adding noise can be viewed as limiting precision. For standard data compression, where $z$ must be a compressed file with a definite number of bits, the equation $z = z_\Psi(y) + \epsilon$ can be interpreted as a rounding operation that rounds $z_\Psi(y)$ to integer coordinates. See Agustsson et al. 2018.
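Concretely, the two readings of the precision-limiting step can be written as a small sketch; the choice of unit-width uniform noise during training is my illustrative assumption here, not a detail taken from the figure or from Agustsson et al. 2018.

```python
import torch

def quantize(z_cont, training: bool):
    """Limit the precision of the continuous code z_cont = z_Psi(y).

    During training, add noise epsilon ~ Uniform(-1/2, 1/2), which keeps the
    pipeline differentiable and plays the role of the fixed noise distribution
    in z = z_Psi(y) + epsilon.  At compression time, round to integer
    coordinates instead, so that z can be coded into a file with a definite
    number of bits.
    """
    if training:
        return z_cont + (torch.rand_like(z_cont) - 0.5)  # z_Psi(y) + noise
    return torch.round(z_cont)                           # integer code
```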
We have now unified VAEs with data-compression rate-distortion models. To unify these with GANs we can take $P_\Theta(z)$ and $y_\Phi(z)$ to be the generator of a GAN. We can train the GAN generator $y_\Phi(z)$ in the traditional way using only adversarial discrimination loss and then measure a rate-distortion metric by training the encoder $P_\Psi(z|y)$ to minimize (3) while holding $\Phi$ and $\Theta$ fixed. Alternatively, we can add a discrimination loss to (3) based on the discrimination between $y$ and $y_\Phi(z)$ and train all the parameters together. It seems intuitively clear that a low rate-distortion value on test data indicates an absence of mode collapse — it indicates that the model can efficiently represent novel images drawn from the population. Ideally, the rate-distortion metric should not increase much as we add weight to a discrimination loss.
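Here is a minimal sketch of the first procedure, measuring a rate-distortion metric for a fixed generator. The choice of per-pixel L1 distortion, a standard normal $P_\Theta(z)$, a diagonal Gaussian encoder, the Adam hyperparameters, and a `loader` that yields batches of images are all illustrative assumptions, not prescriptions from this post.

```python
import torch

def rate_distortion_metric(generator, encoder, loader, beta, steps, lr=1e-4):
    """Train only the encoder P_Psi(z|y) to minimize objective (3),
    holding the GAN generator y_Phi(z) and the prior P_Theta = N(0, I) fixed."""
    generator.requires_grad_(False)                # hold Phi fixed
    opt = torch.optim.Adam(encoder.parameters(), lr=lr)
    for step, y in zip(range(steps), loader):
        mu, logvar = encoder(y)                    # P_Psi(z|y), diagonal Gaussian
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        distortion = (y - generator(z)).abs().mean()                        # L1 distortion
        rate = 0.5 * (mu**2 + logvar.exp() - logvar - 1).sum(dim=1).mean()  # KL to N(0, I)
        loss = distortion + beta * rate
        opt.zero_grad()
        loss.backward()
        opt.step()
    # In practice the two terms would then be averaged over held-out test data.
    return distortion.item(), rate.item()
```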
A standard objection to L1 or L2 distortion measures is that they do not represent “perceptual distortion” — the degree of difference between two images as perceived by a human observer. One interpretation of perceptual distortion is that two images are perceptually similar if they are both “natural” and carry “the same information”. In defining what we mean by the same information we might invoke predictive coding or the information bottleneck method. The basic idea is to find an image representation that achieves compression while preserving mutual information with other (perhaps future) images. This can be viewed as an information-theoretic separation of “signal” from “noise”. When we define the information in an image we should be disregarding noise. So while it is nice to have a unification of GANs, VAEs and signal compression, it would seem better to have a theoretical framework providing a distinction between signal and noise. Ultimately we would like a rate-utility metric for perceptual representations.
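One standard formalization of this idea, stated here in the notation above rather than derived from the unification in this post, is the information bottleneck objective: for a relevance variable $y'$ (for example a future frame), choose the representation to trade the rate $I(y,z)$ against the preserved information $I(z,y')$,

$$\min_{P_\Psi(z|y)}\; I(y,z) \;-\; \beta\, I(z,y').$$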