RNNs, LSTMs and Deep Learning are all the rage, and a recent blog post by Andrej Karpathy is doing a great job explaining what these models are and how to train them. It also provides some very impressive results of what they are capable of. This is a great post, and if you are interested in natural language, machine learning or neural networks you should definitely read it.
Go read it now, then come back here.
You're back? Good. Impressive stuff, huh? How could the network learn to imitate the input like that? Indeed. I was quite impressed as well.
However, it feels to me that most readers of the post are impressed for the wrong reasons. This is because they are not familiar with unsmoothed maximum-likelihood character-level language models and their unreasonable effectiveness at generating rather convincing natural language outputs.
In what follows I will briefly describe these character-level maximum-likelihood language models, which are much less magical than RNNs and LSTMs, and show that they too can produce rather convincing Shakespearean prose. I will also show about 30 lines of python code that take care of both training the model and generating the output. Compared to this baseline, the RNNs may seem somewhat less impressive. So why was I impressed? I will explain this too, below.
The name is quite long, but the idea is very simple. We want a model whose job is to guess the next character based on the previous $n$ letters. For example, having seen ello, the next character is likely to be either a comma or a space (if we assume it is the end of the word "hello"), or the letter w if we believe we are in the middle of the word "mellow". Humans are quite good at this, but of course seeing a larger history makes things easier (if we were to see 5 letters instead of 4, the choice between space and w would have been much easier).
We will call $n$, the number of letters our guess is based on, the order of the language model.
RNNs and LSTMs can potentially learn an infinite-order language model (they guess the next character based on a "state" which supposedly encodes all the previous history). Here we will restrict ourselves to a fixed-order language model.
So, we see $n$ letters, and need to guess the $n+1$th one. We are also given a large-ish amount of text (say, all of Shakespeare's works) that we can use. How would we go about solving this task?
Mathematically, we would like to learn a function $P(c | h)$. Here, $c$ is a character, $h$ is an $n$-letter history, and $P(c|h)$ stands for how likely it is to see $c$ after we've seen $h$.
Perhaps the simplest approach would be to just count and divide (a.k.a. maximum-likelihood estimation). We will count the number of times each letter $c'$ appeared after $h$, and divide by the total number of letters appearing after $h$. The "unsmoothed" part means that if we did not see a given letter following $h$, we will just give it a probability of zero.
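In symbols (this is just the "count and divide" recipe above written out; the count notation $\#(h,c)$ is mine): if $\#(h,c)$ is the number of times character $c$ appeared right after history $h$ in the training text, then

$$P(c \mid h) = \frac{\#(h,c)}{\sum_{c'} \#(h,c')},$$

and $P(c \mid h) = 0$ for any $c$ that never followed $h$.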
And that's all there is to it.
Here is the code for training the model. fname is the file to read the characters from, and order is the history size to consult. Note that we pad the data with leading ~ so that we also learn how to start.
from collections import defaultdict, Counter

def train_char_lm(fname, order=4):
    data = open(fname).read()
    lm = defaultdict(Counter)
    # Pad the data with leading "~" so that we also learn how to start.
    pad = "~" * order
    data = pad + data
    # Count how often each character follows each length-order history.
    for i in range(len(data) - order):
        history, char = data[i:i + order], data[i + order]
        lm[history][char] += 1

    # Turn the counts into probability distributions.
    def normalize(counter):
        s = float(sum(counter.values()))
        return [(c, cnt / s) for c, cnt in counter.items()]

    outlm = {hist: normalize(chars) for hist, chars in lm.items()}
    return outlm
Let's train it on Andrej's Shakespeare text:
!wget http://cs.stanford.edu/people/karpathy/char-rnn/shakespeare_input.txt
lm = train_char_lm("shakespeare_input.txt", order=4)
Ok. Now let's do some queries:
lm['ello']
lm['Firs']
lm['rst ']
So ello is followed by either a space, punctuation or w (or r, u, n), Firs is pretty much deterministic, and the word following 'rst ' (note the trailing space) can start with pretty much every letter.
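If you want to poke at these distributions more comfortably, here is a small helper, not part of the model code above (the name print_top is mine), that prints the most likely continuations for a history, sorted by probability:

# Illustrative helper: show the k most probable next characters for a history.
def print_top(lm, history, k=5):
    for c, p in sorted(lm[history], key=lambda cp: -cp[1])[:k]:
        print(repr(c), round(p, 3))

print_top(lm, 'ello')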
Generating is also very simple. To generate a letter, we will take the history, look at the last $order$ characters, and then sample a random letter based on the corresponding distribution.
from random import random

def generate_letter(lm, history, order):
    # Condition only on the last `order` characters of the history.
    history = history[-order:]
    dist = lm[history]
    # Sample from the distribution by inverse transform sampling:
    # draw x in [0, 1) and walk through the cumulative probabilities.
    x = random()
    for c, v in dist:
        x = x - v
        if x <= 0:
            return c
To generate a passage of $k$ characters, we just seed it with the initial history and run letter generation in a loop, updating the history at each turn.
def generate_text(lm, order, nletters=1000):
    # Start from the padding symbol and generate one letter at a time,
    # feeding the growing history back into the model.
    history = "~" * order
    out = []
    for i in range(nletters):
        c = generate_letter(lm, history, order)
        history = history[-order:] + c
        out.append(c)
    return "".join(out)
lm = train_char_lm("shakespeare_input.txt", order=2)
print(generate_text(lm, 2))
Not so great... but what if we increase the order to 4?
lm = train_char_lm("shakespeare_input.txt", order=4)
print generate_text(lm, 4)
lm = train_char_lm("shakespeare_input.txt", order=4)
print generate_text(lm, 4)
This is already quite reasonable, and reads like English. Just a 4-letter history! What if we increase it to 7?
lm = train_char_lm("shakespeare_input.txt", order=7)
print generate_text(lm, 7)
lm = train_char_lm("shakespeare_input.txt", order=10)
print generate_text(lm, 10)
With an order of 4, we already get quite reasonable results. Increasing the order to 7 (about a word and a half of history) or 10 (about two short words of history) gets us quite passable Shakespearean text. I'd say it is on par with the examples in Andrej's post. And how simple and un-mystical the model is!
Generating English a character at a time -- not so impressive in my view. The RNN needs to learn the previous $n$ letters, for a rather small $n$, and that's it.
However, the code-generation example is very impressive. Why? Because of the context awareness. Note that in all of the posted examples, the code is well indented, the braces and brackets are correctly nested, and even the comments start and end correctly. This is not something that can be achieved by simply looking at the previous $n$ letters.
If the examples are not cherry-picked, and the output is generally that nice, then the LSTM did learn something not trivial at all.
Just for the fun of it, let's see what our simple language model does with the linux-kernel code:
!wget http://cs.stanford.edu/people/karpathy/char-rnn/linux_input.txt
lm = train_char_lm("linux_input.txt", order=10)
print generate_text(lm, 10)
lm = train_char_lm("linux_input.txt", order=15)
print generate_text(lm, 15)
lm = train_char_lm("linux_input.txt", order=20)
print generate_text(lm, 20)
print generate_text(lm, 20)
print generate_text(lm, 20, nletters=5000)
Order 10 is pretty much junk. With order 15 things sort of make sense, but we jump abruptly between contexts, and by order 20 we are doing quite nicely -- but are still far from keeping good indentation and brackets.
How could we? We do not have the memory, and these things are not modeled at all. While we could quite easily enrich our model to also keep track of brackets and indentation (by adding information such as "have I seen ( but not )" to the conditioning history, as sketched below), this requires extra work, non-trivial human reasoning, and would make the model significantly more complex.
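Just to make that point concrete, here is a rough sketch (mine, not from Andrej's post or from the model above; the name train_char_lm_with_depth and the depth cap are arbitrary choices) of a training loop whose conditioning history also carries the current parenthesis nesting depth:

from collections import defaultdict, Counter

def train_char_lm_with_depth(fname, order=4, max_depth=3):
    # Illustrative sketch only: like train_char_lm, but each history is paired
    # with the current parenthesis nesting depth (capped at max_depth), so the
    # model could in principle learn when a ")" is due.
    data = "~" * order + open(fname).read()
    lm = defaultdict(Counter)
    depth = 0
    for i in range(len(data) - order):
        history, char = data[i:i + order], data[i + order]
        lm[(history, min(depth, max_depth))][char] += 1
        # Update the depth with the character we just observed.
        if char == '(':
            depth += 1
        elif char == ')':
            depth = max(depth - 1, 0)

    def normalize(counter):
        s = float(sum(counter.values()))
        return [(c, cnt / s) for c, cnt in counter.items()]

    return {hist: normalize(chars) for hist, chars in lm.items()}

Generation would have to track the depth in the same way, and braces, strings, comments, and indentation would each need similar hand-designed state.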
The LSTM, on the other hand, seems to have just learned it on its own. And that's impressive.