Question

A paper I was reading, http://www.cs.toronto.edu/~ilya/pubs/2011/LANG-RNN.pdf, uses bits per character as a test metric for estimating the quality of generative computer models of text, but doesn't explain how it is calculated. Googling around, I can't really find anything about it.

Does anyone know how to calculate it? Python preferably, but pseudo-code or anything works. Thanks!


Solution

Bits per character is a measure of the performance of compression methods. It is computed by compressing a string, measuring how many bits the compressed representation takes in total, and dividing by how many symbols (i.e. characters) there were in the original string. The fewer bits per character the compressed version takes, the more effective the compression method is.
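As a concrete illustration of that definition (not taken from the paper), you can measure bits per character for any off-the-shelf compressor, e.g. Python's built-in zlib:

```python
import zlib

def bits_per_character(text: str) -> float:
    """Compress text with zlib and report compressed bits per original character."""
    data = text.encode("utf-8")
    compressed = zlib.compress(data, 9)
    return len(compressed) * 8 / len(text)

# Highly repetitive text compresses well, so it needs fewer bits per character
print(bits_per_character("abcabcabc" * 100))
print(bits_per_character("The quick brown fox jumps over the lazy dog."))
```

The same measurement applies to a compressor driven by a learned language model; zlib just makes the idea easy to try.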

In other words, the authors use their generative language model, among other things, for compression, and assume that high effectiveness of the resulting compression method indicates high accuracy of the underlying generative model.

In section 1 they state:

The goal of the paper is to demonstrate the power of large RNNs trained with the new Hessian-Free optimizer by applying them to the task of predicting the next character in a stream of text. This is an important problem because a better character-level language model could improve compression of text files (Rissanen & Langdon, 1979) [...]

The Rissanen & Langdon (1979) article is the original description of arithmetic coding, a well-known method for text compression.

Arithmetic coding operates on the basis of a generative language model, such as the one the authors have built. Given a (possibly empty) sequence of characters, the model predicts what character may come next. Humans can do that too: for example, given the input sequence hello w, we can guess probabilities for the next character: o has high probability (because hello world is a plausible continuation), but characters like h (as in hello where can I find..) or i (as in hello winston) also have non-zero probability. So we can establish a probability distribution of characters for this particular input, and that's exactly what the authors' generative model does as well.
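A toy sketch of such a next-character distribution, using a simple bigram count model rather than the paper's RNN (the corpus string here is made up for illustration):

```python
from collections import Counter, defaultdict

def bigram_model(corpus: str):
    """Estimate P(next_char | current_char) from raw character bigram counts."""
    counts = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        counts[current][nxt] += 1
    return {
        c: {nxt: n / sum(followers.values()) for nxt, n in followers.items()}
        for c, followers in counts.items()
    }

model = bigram_model("hello world, hello winston, hello where are you")
# Probability distribution over the character that follows 'o'
print(model["o"])
```

A real character-level model conditions on the whole preceding sequence, not just the previous character, but the output has the same shape: a probability for each possible next character.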

This fits naturally with arithmetic coding: Given an input sequence that has already been encoded, the bit sequence for the next character is determined by the probability distribution of possible characters: Characters with high probability get a short bit sequence, characters with low probability get a longer sequence. Then the next character is read from the input and encoded using the bit sequence that was determined from the probability distribution. If the language model is good, the character will have been predicted with high probability, so the bit sequence will be short. Then the compression continues with the next character, again using the input so far to establish a probability distribution of characters, determining bit sequences, and then reading the actual next character and encoding it accordingly.
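The relationship between probability and code length described above can be made precise: an ideal arithmetic coder spends about -log2(p) bits on a symbol that was predicted with probability p. A minimal sketch:

```python
import math

def ideal_code_length(p: float) -> float:
    """Bits an ideal arithmetic coder spends on a symbol predicted with probability p."""
    return -math.log2(p)

print(ideal_code_length(0.9))   # confidently predicted character: short code
print(ideal_code_length(0.01))  # surprising character: long code
```

This is why a good language model yields good compression: it assigns high probability to the characters that actually occur.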

Note that the generative model is used in every step to establish a new probability distribution. So this is an instance of adaptive arithmetic coding.

After all input has been read and encoded, the total length (in bits) of the result is measured and divided by the number of characters in the original, uncompressed input. If the model is good, it will have predicted the characters with high accuracy, so the bit sequence used for each character will have been short on average, hence the total bits per character will be low.


Regarding ready-to-use implementations

I am not aware of an implementation of arithmetic coding that allows for easy integration of your own generative language model. Most implementations build their own adaptive model on-the-fly, i.e. they adjust character frequency tables as they read input.

One option for you may be to start with arcode. I looked at the code, and it seems as though it may be possible to integrate your own model, although it's not very easy. The self._ranges member represents the language model as an array of cumulative character frequencies: self._ranges[ord('d')] is the total relative frequency of all characters that are less than d (i.e. a, b, c if we assume lower-case alphabetic characters only). You would have to modify that array after every input character and map the character probabilities you get from the generative model to character frequency ranges.

OTHER TIPS

In "Generating Sequences With Recurrent Neural Networks" by Alex Graves (2014) it is given as the average of -log p(X(t+1) | y(t)) over the whole dataset, where X(t+1) is the correct next symbol and y(t) is the output of the network. This conditional probability is the one assigned to the correct answer; taking the logarithm base 2 gives the result in bits.

Therefore, if the output of your system is probabilistic, bits per character is the average negative log-probability it assigns to the correct next characters, i.e. a measure of its average predictive power.
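That average can be computed directly from the probabilities your model assigns to each actual next character, with no need to run a real compressor. A sketch, where `probs_of_correct_chars` is a hypothetical list of those probabilities in whatever form your model produces them:

```python
import math

def bits_per_character(probs_of_correct_chars):
    """Average of -log2 p over the dataset: the bits-per-character metric."""
    total_bits = sum(-math.log2(p) for p in probs_of_correct_chars)
    return total_bits / len(probs_of_correct_chars)

# Hypothetical probabilities a model assigned to each actual next character
print(bits_per_character([0.9, 0.5, 0.25, 0.8]))
```

A model that always assigned probability 1 to the correct character would score 0 bits per character; a uniform guess over, say, 256 characters would score 8.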

The sys library has a getsizeof() function; this may be helpful? http://docs.python.org/dev/library/sys

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow