Question

I've seen many examples of using Markov chains to generate random words based on source data, but they often seem overly mechanical and abstract to me, and I'm trying to develop a better one.

I believe part of the problem is that they rely entirely on the overall statistical occurrence of letter pairs and ignore the tendency of words to start and end in certain ways. For example, if you use the top 1000 baby names as source data, the letter J is relatively rare overall, yet it's the second most common initial letter. Or, with Latin source data, -um and -us are common word endings, but they wouldn't look common if you treated all pairs the same.
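The gap between overall letter frequency and initial-letter frequency is easy to measure. Here's a short Python sketch (the name list is a hypothetical stand-in for real baby-name data) that counts each letter's frequency anywhere in a word versus its frequency as the first letter:

```python
from collections import Counter

# Toy name list standing in for the top-1000 baby names (hypothetical data).
names = ["james", "john", "jacob", "julia", "anna", "maria", "lucas", "emma"]

overall = Counter("".join(names))               # letter frequency anywhere in a word
initials = Counter(name[0] for name in names)   # letter frequency as the first letter

# On this toy data, 'j' accounts for a small share of all letters
# but half of all initials -- the mismatch the question describes.
print(overall["j"], initials["j"])
```

A plain pair-frequency generator sees only `overall`, so it has no idea that `initials` looks so different.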

So, I'm basically trying to put together a Markov chain based word generator that takes into account the way words start and end in the source data.

Conceptually, that makes sense to me, but I can't figure out how to implement it in software. I'm trying to put together a small PHP tool that lets you drop in source data (e.g., a list of 1000 words) and then generates a variety of random words with realistic starts, middles, and endings (as opposed to most Markov-based word generators, which are based only on the overall statistical occurrence of pairs).

I'd also like the word lengths to be determined by the source data, if possible; i.e., the length distribution of the generated words should roughly match the length distribution of the source data.

Any ideas would be massively appreciated! Thanks.


Solution

The claim that such generators can't respect common beginnings and endings isn't actually true if you treat the "space between words" (the word boundary) as a symbol in its own right: common beginnings will have high frequencies following the boundary symbol, and common endings will have high frequencies preceding it. Correct word length also settles out of this more or less naturally: the mean number of letters you output before transitioning back to the boundary symbol will equal the mean number of letters per word in the training data, although something in the back of my mind tells me the shape of the length distribution might be off.
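The boundary-symbol idea can be sketched in a few lines. This is a minimal Python illustration, not the asker's PHP tool: the boundary is modeled as a literal space character, training counts every transition including boundary-to-first-letter and last-letter-to-boundary, and generation is a weighted random walk from boundary back to boundary.

```python
import random
from collections import defaultdict

BOUNDARY = " "  # the "space between words" symbol

def train(words):
    """Count symbol-to-symbol transitions, with the word boundary as a symbol."""
    counts = defaultdict(lambda: defaultdict(int))
    for word in words:
        symbols = [BOUNDARY] + list(word) + [BOUNDARY]
        for a, b in zip(symbols, symbols[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, max_len=20):
    """Random walk weighted by transition counts; stop at the boundary symbol."""
    out = []
    current = BOUNDARY
    while len(out) < max_len:
        nexts = counts[current]
        current = random.choices(list(nexts), weights=list(nexts.values()))[0]
        if current == BOUNDARY:
            break
        out.append(current)
    return "".join(out)

# Hypothetical sample data; in practice you'd feed in the full source list.
names = ["julia", "james", "john", "anna", "maria", "lucas"]
model = train(names)
print([generate(model) for _ in range(5)])
```

Because every word contributes a boundary-to-initial and a final-to-boundary transition, common starts and endings are reproduced automatically, and typical word length emerges from how often each letter transitions back to the boundary. A second-order version (keying on the previous two symbols instead of one) usually produces more word-like output at the cost of less variety.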

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow