Question

I want to create a Jison (Bison) grammar for a markup language that allows escaping of markup delimiters.

These would be valid:

I like apples
I like [apples, oranges, pears]
I like [apples, oranges, pears] and [peanut butter, jelly]
I like [apples, oranges, pears] \[when they're in season\]
I like emoticons :-\]

The examples would be interpreted perhaps as the following (in JSON representation):

["I like apples"]
["I like ", ["apples", "oranges", "pears"]]
["I like ", ["apples", "oranges", "pears"], " and ", ["peanut butter", "jelly"]]
["I like ", ["apples", "oranges", "pears"], " [when they're in season]"]
["I like emoticons :-]"]

Escaping of []\, is the minimum, but it probably makes sense to allow any printable character to be escaped, even if the escaping is unnecessary.

It'd also be nice if escaping non-printable characters were unsupported. That is, a \ at the end of a line would be illegal. That might come for free with the regex ., since . may not include newlines, but it should also apply to other non-printable characters.
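For example (just a sketch, untested), I imagine restricting the escape alternative in the lexer rule to printable ASCII, so that a backslash followed by a newline or another control character simply doesn't match anything:

(\\[\x20-\x7E]|[^\\\[])+    return 'TOPTEXT'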

It is difficult to google for this because it's mixed up with a lot of results for escaping literal characters in the Bison definition, etc.

What is the most elegant way to support escape characters in a Bison-defined language?

EDIT

Here's what I have so far; it can be tested with Jison, but it isn't working (it just returns 1, if anything), and I don't expect the text to arrive unescaped either - that would require a second pass. Is that avoidable?

/* description: markup */

/* lexical grammar */
%lex
%%

(\\.|[^\\\[])+            return 'TOPTEXT'
(\\.|[^\\\[\]\,])+        return 'TEXT'
\-?[0-9]+("."[0-9]+)?\b   return 'NUMBER'
".."|"-"                  return '..'
"["                       return '['
"]"                       return ']'
","                       return ','
<<EOF>>                   return 'EOF'

/lex

%start markup

%%

markup
    : template EOF
        { return $template; }
    ;

template
    : template TOPTEXT
        { $$ = $template.push($TOPTEXT); }
    | template dynamic
        { $$ = $template.push($dynamic); }
    | /* empty */
        { $$ = []; }
    ;

dynamic
    : '[' phraselist ']'
        { $$ = $phraselist; }
    ;

phraselist
    : phraselist ',' phrase
        { $$ = $phraselist.push($phrase); }
    | /* empty */
        { $$ = []; }
    ;

phrase
    : TEXT
        { $$ = $phrase.push($TEXT); }
    | phrase dynamic
        { $$ = $phrase.push($dynamic); }
    | /* empty */
        { $$ = []; }
    ;

Solution

I think there is more than one problem with your code.

The first (and this explains the 1 output) is that [].push returns the new length of the array, so what you probably want is to push first and then set the value:

template
: template TOPTEXT
    { $template.push($TOPTEXT); $$ = $template; }
| template dynamic
    { $template.push($dynamic); $$ = $template; }
| /* empty */
    { $$ = []; }
;
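If you prefer not to mutate the array in place, an equivalent alternative (purely stylistic, nothing Jison requires) is to build a new array with concat:

template
: template TOPTEXT
    { $$ = $template.concat([$TOPTEXT]); }
| template dynamic
    { $$ = $template.concat([$dynamic]); }
| /* empty */
    { $$ = []; }
;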

The other thing is that it seems you are trying to get too many things to work at the same time, without really being sure that you want them or that they actually work the way they are supposed to.

Maybe a better strategy would be to start small, making it work one rule at a time, starting from the basics.

For example, you could first make sure that you have the lexer working for every case, testing with a simple grammar that just prints out the tokens:

%lex
%%

(\\\\|\\\[|\\\]|\\\,|[^,\\\[\]])+   return 'TEXT'
\-?[0-9]+("."[0-9]+)?\b             return 'NUMBER'
".."|"-"                            return 'RANGE'
"["                                 return '['
"]"                                 return ']'
","                                 return ','

/lex

%start lexertest

%%

lexertest:
token lexertest
| /* empty */
;

token:
TEXT    { console.log("Token TEXT: |" + $TEXT +  "|"); }
|
NUMBER  { console.log("Token NUMBER: |" + $NUMBER +  "|"); }
|
'['     { console.log("Token ["); }
|
']'     { console.log("Token ]"); }
|
','     { console.log("Token ,"); }
|
'RANGE' { console.log("Token RANGE: |" + $1 +  "|"); }
;

Note: when running in the browser, the console.log output only shows up in the developer tools console. You may find that running Jison from the command line, driven by a small Bash or Node script, makes it easier to test with several inputs.
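For example, a minimal Node driver might look like the following. This assumes the grammar was saved as markup.jison and compiled with "jison markup.jison", which produces markup.js; the file names are just placeholders:

// run-markup.js - parses each command-line argument with the generated parser
var parser = require('./markup.js');

process.argv.slice(2).forEach(function (input) {
    console.log(input, '=>', JSON.stringify(parser.parse(input)));
});

Then run it as: node run-markup.js "I like [apples, oranges, pears]"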

Then you refine it until you are content with it. Once you are happy with the lexer, you start making the grammar work, again testing one rule at a time. Keep the rules above around for whenever you want to debug the lexer's output; you can just change the %start rule.
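For instance, with both start symbols kept in the same file, switching between them is a one-line change (sketch; markup being the start rule from your grammar):

%start markup
/* %start lexertest    -- use this instead to dump the token stream */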

In the end, you may well discover that you never needed EOF in the first place, and that perhaps you don't need two different rules for matching the free text after all.
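As for the unescaping question from your edit: one common option (if it fits your needs) is to strip the backslashes inside the lexer action itself, so the grammar actions already receive unescaped text and no second pass over the parsed result is needed. Roughly:

(\\.|[^\\\[\]\,])+    { yytext = yytext.replace(/\\(.)/g, "$1"); return 'TEXT'; }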

Hope it helps.

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow