Question

I was adding zero-or-more and one-or-more modifiers to my PEG parser, which is straightforward since there is so little backtracking in PEG. Earlier iterations are never reconsidered, so a simple while loop suffices.
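To illustrate, this is roughly the kind of loop I mean (a minimal sketch; `parse` stands for any sub-parser, assumed here to return a new position on success or `None` on failure):

```python
def zero_or_more(parse, text, pos):
    """PEG zero-or-more: greedily apply `parse` until it fails.

    Earlier iterations are never reconsidered, so a plain loop
    suffices. Always succeeds, returning the last good position.
    """
    while True:
        new_pos = parse(text, pos)
        if new_pos is None:
            return pos
        pos = new_pos
```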

However, in other contexts, zero-or-more and one-or-more modifiers do require backtracking. For example, take the following regular expression:

(aa|aaa)+

This expression should be able to greedily match a string of seven a's: there are several ways to add up 2 and 3 to get 7. But to get there, reconsidering earlier iterations is necessary. For instance, if the expression matches three a's the first time and three a's the second time, only one a remains, which cannot be matched. Backtrack the last three a's and match two a's instead, however, and five a's are matched. Then the last two a's can be matched too (i.e., 3 + 2 + 2 = 7).
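A tiny backtracking matcher for this expression, anchored to the whole string, might look like this (a hypothetical sketch, not any real engine's code):

```python
def match_repeat(text, pos=0):
    """Backtracking match of (aa|aaa)+ against the entire string.

    Tries each alternative in turn; if the rest of the string cannot
    be consumed, backtracks and tries the next alternative.
    """
    for chunk in (3, 2):  # greedy: try the longer alternative first
        if text[pos:pos + chunk] == "a" * chunk:
            end = pos + chunk
            if end == len(text):
                return True
            if match_repeat(text, end):
                return True  # the rest of the string matched
            # otherwise fall through: backtrack to the next alternative
    return False
```

With seven a's, the first attempt (3 + 3) strands a single a, so the matcher backs up and succeeds via 3 + 2 + 2.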

Fortunately, the regular expression quits its search once it has matched the string. But what about an EBNF parser? If the grammar is ambiguous, the parser must use backtracking to find all possible syntax trees! If we have the production


( "aa" | "aaa" )*

and a string of seven a's, a fully backtracking parser would return all possible ways of expressing 7 in terms of 2 and 3. And that's just for seven a's: match a slightly longer string, and the N-ary tree of possibilities grows another level. Consider N = 6:

S : ( T )*
  ;

T : A
  | B
  | C
  | D
  | E
  | F
  ;

A terrifying combinatorial explosion!
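To make the forest concrete, here is a small sketch that enumerates every way the repetition can consume the input (each result is one sequence of iteration lengths):

```python
def all_parses(n, parts=(2, 3)):
    """Enumerate every ordered way a repetition over alternatives of
    the given lengths can consume n characters -- the "parse forest"
    a fully backtracking parser would return."""
    if n == 0:
        return [[]]
    results = []
    for p in parts:
        if p <= n:
            for rest in all_parses(n - p, parts):
                results.append([p] + rest)
    return results
```

For seven a's this yields three parses (2+2+3, 2+3+2, 3+2+2), and the count grows quickly with the input length and the number of alternatives.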

Could this really be the case, though? Are there no restrictions on the zero-or-more and one-or-more modifiers in EBNF? Implementing them as described would be a lot more work than the plain while() loop of the PEG parser, so I have to wonder ...


Solution

yes; backtracking can give you a lot of results. i'm the author of lepl, which is a recursive descent parser that will happily backtrack and produce a "parse forest" of all possible ASTs. and there is no such restriction in EBNF (which is just a specification language, not tied to any particular parser implementation).

but not all parsing algorithms backtrack. many implementations of regular expressions do so, but it is not always necessary. in fact, for a "simple" regular expression (one that really is confined to regular grammars) it is possible to match without backtracking at all - the trick is to, in a sense, run things in parallel.

there are two (equivalent) ways of doing this - either by "compiling" the regular expression (working out ahead of time what the parallel work would be if it were made explicit), or by juggling parallel matches at runtime. the compilation approach translates the regular expression to a DFA (deterministic finite automaton). more exactly, an NFA (non-deterministic finite automaton) is vaguely like a graph version of the regex, and is probably how you imagine regular expressions working; simulating an NFA one path at a time does require backtracking, but you can translate the NFA into a DFA, which does not.

however, doing this at runtime is easier to understand (and tends to be more useful in practice) and is explained in three awesome articles which you should really read if you want to understand this better: http://swtch.com/~rsc/regexp/regexp3.html and the links at the start of that.
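as a sketch of the runtime approach (hard-coded to the (aa|aaa)+ example above just to show the idea; a real engine builds the state machine from the expression):

```python
def match_parallel(text):
    """match (aa|aaa)+ by tracking all live states in parallel,
    in the spirit of thompson's nfa simulation - no backtracking.

    a state is the number of a's consumed by the current iteration;
    reaching 2 or 3 completes an iteration, so a new one may start."""
    states = {0}
    matched = False
    for ch in text:
        next_states = set()
        for s in states:
            if ch == "a":
                s += 1
                if s in (2, 3):
                    next_states.add(0)  # iteration done; may start another
                if s < 3:
                    next_states.add(s)  # still inside an iteration
        states = next_states
        # matched iff complete iterations consumed everything so far
        matched = 0 in states
    return matched
```

each character is examined once, and the set of live states is bounded by the size of the expression, so the runtime is linear in the input no matter how ambiguous the repetition is.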

i cannot emphasize this enough - you need to read those articles...

ps vaguely related - you can make backtracking more efficient by caching results that you might need again later (when you end up arriving at the same text and expression via a different route). this is called "packrat parsing" when applied to recursive descent parsing (although to be honest it's not worth a separate name - it really is just using a cache). the caching avoids exponential runtimes - there's a paper somewhere by norvig (the guy at google, but this was written way before) that explains it: http://acl.ldc.upenn.edu/J/J91/J91-1004.pdf
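as a sketch of the caching idea (using a hypothetical anchored grammar S <- ("aa" / "aaa") S / ("aa" / "aaa") just to show the memoization):

```python
from functools import lru_cache

def make_parser(text):
    """packrat-style caching: memoize results per position, so arriving
    at the same spot via a different route costs nothing the second time."""
    @lru_cache(maxsize=None)
    def S(pos):
        for chunk in (2, 3):
            if text[pos:pos + chunk] == "a" * chunk:
                end = pos + chunk
                if end == len(text) or S(end):
                    return True
        return False
    return S

# usage: make_parser("a" * 7)(0) - each position is evaluated at most once
```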

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow